Hack to the Future - Frontend
Table of Contents
- Hack to the Future - Frontend
- 1. Introduction
- 2. Setting the Time Circuits to the late 90s
- 3. The Early Web - Layout and Design Practices
- 4. The Plugin Era – Flash and Friends
- 5. The JavaScript Library Explosion
- 6. CSS Workarounds and Browser Quirks
- 7. Markup of the Past
- 8. Tools and Workflow Relics
- 9. Legacy Web Strategies
- 10. Tests and Standards of Yesteryear
- 11. What Still Matters - Progressive Enhancement
- 12. Lessons for the Future
- 13. Post Summary
1. Introduction
Context
So over the last few months at work, I've been conducting interviews to hire Frontend Developers for a number of new projects we have in the pipeline. It was only when looking at CVs that it struck me: a lot of these candidates weren't even born when I first started my Web Development career! So I thought developers getting into a Frontend Developer career today might want to learn a bit about what it was like when I first started (that sentence just makes me feel old! 👴)
Looking back at “legacy” practices
Why would we want to look back on legacy best practices on the web, other than for the obvious academic and general-interest reasons?
Studying past best practices and legacy systems is crucial for understanding the evolution of technology and making informed decisions today. By examining the problems old practices were designed to solve, we gain a deeper appreciation for current best practices and avoid repeating past mistakes. As the philosopher George Santayana once said:
Those who cannot remember the past are condemned to repeat it.
This historical perspective also reveals enduring principles like progressive enhancement, which remains vital for creating accessible and resilient systems on the web.
Lessons we can apply today
For developers, understanding past methodologies is essential for properly maintaining and modernising existing systems in the future without causing critical failures. This historical knowledge will ultimately help them navigate the complexities of older codebases, to ensure they make informed decisions about how to update or replace components. Above all, reflecting on the past can help us come up with creative new ideas and prevent us from blindly following new trends. This perspective also provides a comprehensive view of how the web has evolved, grounding our current practices in a deeper understanding of the technology's history.
This process of building on past knowledge is a fundamental aspect of human progress. Just as civilizations learn from historical events to avoid repeating mistakes, developers can learn from the successes and failures of past technological eras. It's how humanity has always evolved: by building upon the accumulated wisdom and experience of those who came before us. By studying the mistakes and triumphs of the past, we improve our own work and contribute to the continuous cycle of innovation and learning that drives our entire industry forward.
2. Setting the Time Circuits to the late 90s
My first website build
In 1998, while working toward my GCSEs, I became interested in art and design, partly thanks to having an art teacher as my form tutor throughout secondary school. That influence, combined with the opportunity to take a double art GCSE for the same effort as a single GCSE, made the choice a pretty easy one! GCSE Art, here I come!
At the same time, I was already immersed in the emerging world of the internet, spending many hours online and discovering a passion for all areas of computing and online gaming thanks to QuakeWorld Team Fortress, despite the frustration it caused at home by tying up the phone line at all hours of the day. Oh, how I loved my US Robotics 56K modem, with its 120-150 ping! Integrated Services Digital Network (ISDN), let alone any form of broadband, was still many years away for most people!
I was never exactly blessed with traditional artistic talent; painting, drawing, all of those art forms just weren't my thing. But I spotted an opportunity to combine my love of technology with the art curriculum. Back then, there were only about 2.4 million websites in existence worldwide. Most businesses and schools (including mine) were firmly offline. So, I proposed building a website for my final art project. To my surprise, my art teacher was absolutely thrilled with the idea. It turned out to be a first for the school and, as I later discovered, a first for the entire exam board too. Shock horror: I was ahead of the curve once. The curve has been safely ahead of me ever since.
I ended up creating a website for a fake record label, complete with a dreadful album cover, fictional artist, and made-up discography. Honestly, I wish I still had it! It was gloriously awful! I don’t recall much, but I remember the site used a <frameset> with three <frame> elements. The top frame displayed the logo, the left frame held the navigation menu, and the main frame was used for the page content.
The logo, by the way, was crafted in a program called 3D Text Studio (or something similar to that) that churned out spectacularly cheesy animated text like this! From a web performance perspective, that single GIF exceeded 2 MB. On a 56K modem, which was the standard connection for most users of the web at the time, that translates to a 6-minute loading time for just that GIF! Fortunately, it was never hosted online and was presented to the examiners directly from my local machine.
Long story short… the examiners loved my little website and I got a double A* Art GCSE for my effort!
So what's all this preamble leading to? Well, this is just a long-winded way to tell you (again) that I'm old… 😭
The late 90s web landscape
There's something I've noticed while questioning candidates in interviews recently: many candidates don't have the faintest idea about some of the old methodologies used in the world of Frontend, especially those from the "unstable" periods of the web like the late 90s and early 00s:
- first browser war (1995–2001): Internet Explorer vs Netscape Navigator.
- second browser war (2004–2017): Internet Explorer vs Firefox vs Google Chrome.
Being a Frontend Developer in the late 90s was fun in terms of innovation, but also exceedingly stressful due to the instability of the web platform! A prime example was cross-browser development: what worked in Netscape often looked very broken in Internet Explorer (and vice versa)! And if you had clients who were looking for "pixel perfect" designs across all browsers, you were in for a bad time!
Throughout this period, a plethora of methodologies, tools, and workarounds were developed to address deficiencies in the web platform. And that’s what the rest of this post will delve into. Buckle up folks, we are about to time travel to an era when the internet started with the screeching of dial-up noises and I still had brown hair!
3. The Early Web – Layout and Design Practices
Photoshop PSDs as the “single source of truth”
Using Adobe Photoshop Documents (PSD) as a single source of design truth was a very common practice in the early days of web design. This was particularly common when design and development teams were siloed. A designer would create a PSD file that was intended to be precisely what the website would look like in the browser.
Issues
There were no considerations made for page structure, behaviour, or interactions. These fixed-layout PSDs encouraged bad practices like:
- Fixed page dimensions e.g. 1024px x 768px as a static canvas.
- 1:1 mapping of Photoshop file to web page, which was rarely achievable, especially given cross-browser inconsistencies with page rendering.
- Lack of fluid or responsive design. I realise responsive design wasn't "a thing" at this time, but could it have been adopted sooner if fixed-width PSD workflows hadn't ever taken hold?
- The technique was more suited to static layouts, like print design, rather than web design.
- There were issues tracking interaction states like anchors with hover, active, disabled, and focus.
- Dynamic content was difficult to visualise (e.g., the rendering of different lengths of text in the browser).
- Poor accessibility adaptations, (e.g., increased font sizes, high-contrast modes weren’t considered in the design files).
The only way to solve many of these issues would have been to create multiple PSDs to hold all these different design assumptions. And in doing so, file management and design revisions would quickly have become impractical and prone to being incomplete or inconsistent.
Broken team collaboration
The use of PSDs as the single source of truth broke how teams could collaborate and innovate. This was because:
- Developers would often have to interpret or translate the PSD design manually without the help of designers (e.g. due to siloed teams and strict job roles).
- Changes in the design required round-trips to designers, rather than being evolved collaboratively in code.
- Small team bottlenecks were common e.g. all design or development decisions needed to go through individuals rather than a whole team.
- Files became outdated rapidly, leading to teams working on stale designs without realising it.
- Designers often came up with designs that simply couldn't be built with the web technologies that existed at the time, especially when their designs were expected to work across different browsers.
Modern Alternatives
I'd like to think that designers using Photoshop for modern web design is a thing of the past, given the vast number of tools and techniques that are way more suited to the job than Photoshop ever was. Modern teams typically use:
- Design tokens and internal component libraries as the "single source of truth".
- Figma or similar tools with structured, token-aware components.
- Living style guides and code-driven prototypes (e.g., Storybook).
- Clear handoffs between teams using tools like zeroheight, or integrated design-to-dev platforms.
These modern collaboration tools enable design and development teams to share the same language and source of truth, rooted in reusable, well-tested, and accessible components.
Photoshop PSDs Summary
In the early days of my frontend career, slicing PSDs was second nature, but that workflow is now obsolete. Using Photoshop as a "single source of truth" leads to siloed teams, rigid layouts, and poor collaboration. It ignores responsiveness, accessibility, and the realities of modern web development. Today, tools like Figma, design systems, and component libraries enable faster, more inclusive, and collaborative workflows. If you’re still building from PSDs, it’s time to move on! As the web has evolved, it is imperative that we all do the same.
Frame-Based Layouts
Frame-based layouts were introduced into browsers to solve a specific set of problems. These were:
- To allow static content like navigation menus to remain in place while only the main content of the page gets updated on navigation.
- To reduce the amount of data transferred over the network, since only one part of the page would need to be reloaded. This was important at the time; remember, in the late 1990s and early 2000s, broadband simply wasn't available to most people. If you were very lucky (and had the money), you could get an Integrated Services Digital Network (ISDN) line installed in your home, but it was mostly businesses that had the money (and justification) for this type of connection, and even ISDN wasn't particularly quick. Adjusted for inflation, you'd be looking at £60 to £80 per month for a 0.128 Mbps connection!
- To simulate a more app-like experience before JavaScript (JS) and CSS became more standardised and mature.
Example
For those curious, here's a simple example of an HTML page using frames:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">
<html>
<head>
<title>Simple Frame Example</title>
</head>
<!-- Note no Body element: 2 vertical columns -->
<!-- 30% width the menu.html document -->
<!-- 70% width for the main content of the page -->
<frameset cols="30%,70%">
<frame src="menu.html" name="menuFrame">
<frame src="content.html" name="contentFrame">
</frameset>
</html>
Notes:
- To use <frame> and <frameset> you needed a specific HTML 4.01 Frameset DOCTYPE in the index.html file.
- In my example, for a single HTML page you'd have to maintain 3 HTML files (index.html, menu.html, and content.html).
- Each frame was like a mini browser window that loaded its own HTML document.
Problems
Unfortunately, there were a number of major issues with Frame-Based Layouts:
- Terrible user experience: the use and navigation of frames was confusing for users, since you effectively had multiple browser panes in a single page. The URL bar would often remain static even as the content of the page changed.
- Poor Accessibility: Screen readers and other assistive technology struggled to navigate frames, making it incredibly difficult for users with disabilities to understand the page content and overall page structure.
- Limited Search Engine Optimisation (SEO) compatibility: Even search engines of the day struggled to understand index pages built with frames. This led to poor visibility in search results, as crawlers frequently failed to understand the relationship between the different frames.
- Navigation and Browser Compatibility: Because the back and forward buttons did not consistently produce the desired results, frames disrupted the navigation history, making it difficult for users to find their way around. The fact that different browser vendors weren't aligned on how frames should work led to cross-browser issues too.
- Bad for security: Frames allowed for security risks like clickjacking. This is where an attacker gets a user to interact with a page that contains malicious content without the user even realising. Modern browsers now include protections to stop these types of security issues.
Modern Alternatives
- Modern CSS Layouts: Flexbox and Grid allow for responsive layouts without compromising navigation, accessibility, and SEO (see the sketch after this list).
- Single Page Applications (SPAs): Frameworks like React, Angular, and Vue allow developers to load page content dynamically without the need for full-page reloads. Be careful though, these libraries come with their own inherent issues if not used correctly!
- Server-Side Rendering and Partial Updates: techniques like server-side includes, AJAX, or component-based rendering to update portions of a page efficiently.
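To make the first of those alternatives concrete, here's a minimal sketch of the earlier menu-plus-content frameset rebuilt as a single page with CSS Grid (the class name is mine, purely illustrative):
/* Replaces <frameset cols="30%,70%"> with one HTML document */
.layout {
display: grid;
grid-template-columns: 30% 70%;
min-height: 100vh;
}
With that on a wrapper element, the navigation and content simply become a <nav> and a <main> inside one HTML document: no separate menu.html and content.html files, and no broken history or bookmarking.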
Frame-Based Summary
As mentioned in the introduction at the start of this post, my first website was built using frames! I sincerely hope you never have to maintain a frame-based website! But given the enormity of the internet, it is almost certain such sites still exist out there, having been untouched for decades! If you do come across one, remember to take a quick peek at the source code; it's like looking back in time! Frames once served a purpose in the early days of the web but are now considered obsolete. They introduced more problems than they solved, and they have been replaced with techniques that are more performant, accessible, and maintainable. Any modern website should be using semantic HTML, CSS-based layouts, and progressive enhancement.
Table-Based Layouts
In the late 1990s and early 2000s, table-based layouts were a common technique for building a web page's structure. A simple example of what this would look like is below:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>Table Layout Example</title>
<!-- Very simple CSS to modify the key elements in the table-based layout -->
<style>
body {
font-family: Arial, sans-serif;
}
td {
padding: 10px;
border: 1px solid #ccc;
}
.header {
background-color: #f2f2f2;
text-align: center;
font-weight: bold;
}
.nav {
background-color: #e0e0e0;
width: 200px;
vertical-align: top;
}
.content {
background-color: #ffffff;
vertical-align: top;
}
.footer {
background-color: #f2f2f2;
text-align: center;
font-size: 0.9em;
}
</style>
</head>
<body>
<table width="100%" cellspacing="0" cellpadding="0">
<!-- Header -->
<tr>
<td colspan="2" class="header">
My Table-Based Web Page
</td>
</tr>
<!-- Body -->
<tr>
<!-- Navigation -->
<td class="nav">
<ul>
<li><a href="#">Home</a></li>
<li><a href="#">About</a></li>
<li><a href="#">Contact</a></li>
</ul>
</td>
<!-- Main Content -->
<td class="content">
<h2>Welcome</h2>
<p>This layout uses an HTML table for structure, which was common before CSS-based layouts became standard.</p>
</td>
</tr>
<!-- Footer -->
<tr>
<td colspan="2" class="footer">
&copy; 2025 Example Company
</td>
</tr>
</table>
</body>
</html>
Why was it used?
At the time, CSS layout techniques were inconsistent and unstable across browsers, so developers looking for stable cross-browser rendering turned to tables, which offered:
- Predictable cross-browser rendering
- Control over alignment, spacing, and sizing
- Ability to nest elements in a grid-like structure
It was very common to see nested tables and transparent "spacer GIFs" in invisible table cells used to control these layouts more precisely. You'd often find logos, sidebars, navigation menus, footers, and content areas all laid out within a deeply nested HTML table in order to achieve the required layout and design.
Why was it so bad?
The first and hopefully most obvious point is that the <table></table> element was intended for the display of tabular data. The fact that it was used as a workaround for the lack of standardised layout techniques, shows the ingenuity of developers at the time.
Unfortunately, the use of tables for layout came with many considerable downsides, these included:
- Semantics: As mentioned, tables should represent structured data, not layout. Misusing them confuses assistive technologies and harms accessibility.
- Maintainability: Table-based layouts are challenging to read, modify, or scale. Small changes often require restructuring entire layouts.
- Responsiveness: They are rigid and not suited to fluid or responsive design, that was to come a number of years later.
- Performance: They delay rendering because browsers need to calculate the entire table layout before painting it to the page.
Is the technique still used?
There are some areas where table-based layouts may still be seen:
- Legacy code bases that desperately need to be refactored, I can imagine there are many internal systems across the world where table-based layouts are still used. I’d imagine the conversation about modernising goes something like this… "If it still works, why change it?". Very short-sighted I know!
- Table-based layouts are still widely used in emails due to the very limited support for CSS in email clients. It's not always the lack of support, it's the fact that many clients simply strip out any CSS in the process of rendering the email HTML.
- To give you an example of how bad it still is: from Outlook 2007 onwards, Microsoft switched to Microsoft Word as the HTML rendering engine! And it's still in use today with Outlook 365! I did my fair share of HTML emails as a Junior Frontend Developer, and the internationalised versions were the worst! Using the same table-based layouts for 19+ languages is never going to work well, especially with languages like German and its famously long words! Sorry… rant over!
- They are often still used in PDF generation tools e.g. data-driven print views: invoices etc.
Modern alternatives
Modern CSS offers clean, semantic, and powerful layout tools, including:
- Flexbox: One-dimensional layouts (ideal for nav bars, toolbars, etc.)
- CSS Grid: Two-dimensional layouts (ideal for full-page layout and complex structures; see the sketch after this list)
- Media Queries: Enable responsiveness across devices
- Container Queries (still an emerging technology): Context-aware layout changes.
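As a quick sketch of how far we've come, the entire header/nav/content/footer structure from the table example above collapses into a few declarative lines of CSS Grid (class names are illustrative):
.page {
display: grid;
grid-template-areas:
"header header"
"nav content"
"footer footer";
grid-template-columns: 200px 1fr;
}
.header { grid-area: header; }
.nav { grid-area: nav; }
.content { grid-area: content; }
.footer { grid-area: footer; }
Each area then maps to a semantic element (<header>, <nav>, <main>, <footer>) instead of a <td>.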
Table-Based Summary
Table-based layouts are a throwback to a bygone era, thankfully! The years of building HTML emails have scarred me for life! They were developed during a period in which CSS was inadequate for the task; developers had to get creative to wrestle with browser quirks, and tables were the go-to workaround. Thankfully, these days we've moved on to semantic HTML and proper CSS that actually does what we need (for webpages anyway). It's cleaner, more flexible, more maintainable, and way better for accessibility.
Quirks Mode Layouts
This topic is covered in more detail later in the blog post, but I’ll briefly mention it here for completeness.
It's important to realise that Quirks Mode wasn't limited to Internet Explorer (IE). It originated with IE, but it later became a cross-browser convention in order to preserve compatibility with the many existing pages on the internet. That's the primary rule to consider when rolling out any new technology change on the web. Whatever you do, "don't break the web!"
For example, if a vendor released a new browser feature that wasn't backwards compatible with earlier versions of web pages, then you have a major issue as you've just broken the web! I talk about XHTML 2.0 later in the post, as it is a prime example of a proposed technology that would have broken the web. This backwards compatibility was the sole purpose of Quirks mode. It gave modern browsers the ability to switch between:
- Quirks Mode: Mimic pre-standards behaviour. Used for old, non-compliant pages.
- Standards Mode: Adheres to modern web specifications (W3C and WHATWG standards).
- Almost Standards Mode: The same as Standards mode only with one exception, table cell line-height rendering. This was to preserve layouts that used inline images inside HTML tables.
How were layouts triggered?
The browser decided which layout mode to use from the list above purely from the DOCTYPE used on the page. For example:
Trigger Quirks mode
This DOCTYPE will trigger Quirks mode layout:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
It looks valid, but it is missing the system identifier (URL); that makes it a malformed DOCTYPE, so Quirks Mode is triggered. A valid DOCTYPE is given below for comparison:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
That missing URL in the DOCTYPE is vital. Quirks Mode would also be triggered if a page had no DOCTYPE at all, or if its DOCTYPE differed in any way from a valid one. IE even had a really nasty habit of triggering Quirks Mode if any character was output in the page source before the DOCTYPE, including invisible characters, newlines, and carriage returns! As you can imagine, it made debugging issues an absolute nightmare!
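To illustrate just how fragile this was, the following page would render in Standards Mode in most browsers, yet the single XML prolog line before the DOCTYPE was enough to knock IE6 back into Quirks Mode (a famously common trap with XHTML pages; IE7 later fixed this particular case):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
Removing that first line (or serving it only to non-IE browsers) was the usual fix.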
Almost Standards Mode
The following DOCTYPEs will trigger Almost Standards Mode:
- HTML 4.01 Transitional (with full system identifier):
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
- HTML 4.01 Frameset (with full system identifier):
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">
- XHTML 1.0 Transitional:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
- XHTML 1.0 Frameset:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd">
Standards Mode
And lastly, and most importantly for modern web development, this is the DOCTYPE you should be using to trigger Standards Mode in all modern browsers:
<!DOCTYPE html>
This simplified DOCTYPE was brought in as part of the HTML5 specification after 6 years of standardisation (2008–2014).
Why was this version created?
As outlined in all the examples above, previous DOCTYPE versions were:
- Long
- Error-prone
- Required both a public and a system identifier
- Affected rendering modes (Quirks, Almost Standards, Standards)
To solve these issues, the new DOCTYPE:
- does not reference a Document Type Definition (DTD), as HTML5 no longer relies on SGML-based validation.
- has only a single purpose: to trigger Standards Mode in all modern browsers.
Quirks Mode Summary
As we have discussed above, Quirks Mode wasn't an IE exclusive layout mode. It was introduced into all browsers in order to "not break the web". To ensure your website uses Standards Mode, use:
<!DOCTYPE html>
And remember, it must be the very first thing in the source code of the page!
Iframe Embeds for Layouts or Content
If you've already read the Frame-based Layouts section above, then this section will be very similar. Although both are now considered legacy techniques, they come with distinct differences.
Frameset
As I discussed earlier, here's some example <frameset> code:
<!-- column 1 25% width / column 2 75% width -->
<frameset cols="25%,75%">
<frame src="nav.html">
<frame src="main.html">
</frameset>
- The <frameset> tag completely replaced the <body> tag and allowed developers to split the browser window into multiple, scrollable, resizable sections.
- Each section (<frame>) loaded a separate HTML document (as seen in the code above).
- The technique was intended to provide a layout structure, e.g. different parts of the user interface (UI) came from different HTML documents.
- Navigation in one frame would control the content in another frame.
Inline frames (Iframes)
These were introduced later in the HTML 4.01 Transitional specification. Example <iframe> code is found below:
<body>
<iframe src="content.html"></iframe>
</body>
You will immediately notice the difference: using an <iframe> doesn't replace the <body> tag, it embeds an external HTML page into the original page.
- The usage of iframes was to embed content into other pages, not structure the whole layout.
- iframes were used to load external or isolated content within a single, self-contained HTML page.
User Experience
Both methods came with issues; iframes were slightly better, but they still caused significant problems:
- Embeds could be styled and resized, but remained isolated from the parent.
- Navigation, SEO, and accessibility were still severely impacted when misused.
Standards and Browser Support
Framesets were deprecated and completely removed from the HTML5 specification; modern browsers either no longer support them or only support them in a very limited way without the correct DOCTYPE.
Iframes are still a part of the HTML5 specification, but they come with limited use cases, including:
- Secure sandboxing
- Third-party embeds
In fact, the performance of iframe embeds has recently been improved by browser vendors adding support for loading="lazy" on iframes, allowing the browser to delay loading until the iframe approaches the viewport.
Security
Thankfully, iframes are more secure on the modern web thanks to attributes like sandbox (which restricts what the embedded document is allowed to do) and allow (which controls the embed's access to powerful browser features).
That said, the use of iframes still needs careful consideration for content integrity, cross-domain policies, and maintainability. They can still introduce security vulnerabilities such as:
- Clickjacking
- Cross-site scripting (XSS) exploits.
- Cross-origin resource sharing (CORS) exploits.
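Used defensively, those attributes make a big difference. Here's a minimal sketch of a locked-down third-party embed (the URL is a placeholder):
<iframe src="https://example.com/widget" title="Third-party verification widget" sandbox="allow-scripts allow-forms" referrerpolicy="no-referrer" loading="lazy"></iframe>
The sandbox attribute starts from "deny everything" and only grants back the capabilities you list, which is exactly the posture you want for content you don't control.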
Can I still use them?
Yes, iframes are still a part of the HTML5 specification, so you can still use them, but you should try to only use them in specific scenarios where required, for example, when you need to embed third-party content into a website. This is often seen in banking on the modern web: when you are paying for something online, the final step of the transaction frequently loads an iframe from your bank asking you to verify your purchase, usually via some form of Multi-factor Authentication (MFA), e.g. SMS or banking app approval on your phone. While iframe embeds aren't specifically required for PCI compliance, many online merchants implement them along with 3D Secure authentication to help reduce PCI scope and exposure.
Modern alternatives
As mentioned in the Frame-Based Layouts section, there are a number of modern alternatives to using iframes for page layout, modularisation, and code isolation. These include:
- Modern CSS Layouts: Flexbox and Grid allow for responsive layouts without compromising navigation, accessibility, and SEO.
- Single Page Applications (SPAs): Frameworks like React, Angular, and Vue allow developers to load page content dynamically without the need for full-page reloads. Be careful though, these libraries come with their own inherent issues if not used correctly!
- Server-Side Rendering and Partial Updates: techniques like server-side includes, AJAX, or component-based rendering to update portions of a page efficiently.
Iframe Embeds Summary
Using iframes for modularisation and isolation is now seen as a legacy approach and should generally be avoided on the modern web. That said, they still have a place for things like third-party embeds. I remember the early days of Facebook marketing, iframes were everywhere and an absolute nightmare to work with, especially when trying to get the sizing right within their UI. Urghh!
Pixel-Perfect Design
Pixel-perfect design is a now-outdated design approach that was once considered the gold standard in frontend development. The UI presented on the web page was intended to be a pixel-perfect replica of the design mockups… If that sounds like a ridiculous and totally unworkable strategy, then I can confirm: yes, it was!
What the technique means in practice
- Strict visual fidelity: Developers were expected to reproduce every element of a design using the exact measurements, colours and fonts given in a static design file like a Photoshop Document file (PSD).
- Close alignment with design tools: The priority while using this technique was to match the layout in the design document, no matter what. Inconsistencies across various browsers, not to mention differing screen sizes and user preferences, frequently made this an impossible task.
- Used in fixed-resolution environments: The technique worked reasonably well in the era of desktop-only websites with very few fixed screen sizes (e.g. 800×600, 1024×768).
Why is it considered legacy?
- Responsive design is now standard
- Modern devices now range from the size of watches all the way up to ultra-wide monitors and TVs, and everything in between. Pixel-perfect design fails on both smaller and larger screens.
- Accessibility and user preferences
- Modern Accessibility-first design requires designs to be flexible in terms of layout and scaling. For example, WCAG 2.2 Success Criterion 1.4.4 expects text to be scaled up to 200% without loss of content or functionality. This simply can't be done when using the pixel-perfect design technique.
- Performance and Maintainability:
- It encourages the use of hardcoded CSS rather than using scalable design systems and components.
- Modern Design Tools and Systems
- Design Intent Over Exact Replication
- Modern web design focuses on intent and consistency across contexts, prioritising usability and accessibility over pixel-perfect replication by leveraging newly introduced responsive browser technologies.
Can I still use it?
I hate to be the bearer of bad news, but no, you can't. It simply isn't compatible with the modern web, for all the reasons I've given above. You are better off understanding what the technique was trying to achieve and focus on modern-day alternatives.
Modern Alternatives
- Use of Design tokens for spacing, colour, and typography (sketched below).
- Modern browser layouts like CSS Grid and Flexbox.
- Component libraries and Design Systems that abstract repeated patterns.
- Focus on the user content, not the dimensions of the UI viewing it.
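As a tiny sketch of that first alternative, design tokens are often implemented as CSS custom properties (all the names here are invented for illustration):
:root {
--color-brand: #0a66c2;
--space-md: 1rem;
--radius-sm: 4px;
}
.card {
color: var(--color-brand);
padding: var(--space-md);
border-radius: var(--radius-sm);
}
Change a token once and every component picks it up; no hunting through hardcoded hex values and pixel sizes.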
Pixel-Perfect Summary
Pixel-perfect design was always a terrible design technique that set everyone up to fail with the expectation of pixel accurate designs across browsers and devices. It served its purpose when the web was a lot simpler than it is today. A pixel-perfect design on today's web is a clear indication of an outdated aesthetic, urgently requiring a modern overhaul.
Fixed Pixel Layouts
There's a reason why I've mentioned Fixed Pixel Layouts directly after Pixel-Perfect Design. They are related, but they are distinct concepts. Both emerged in an early era of web design, but they served different purposes.
Pixel-perfect Design
As you will have just read above, pixel-perfect design emphasises the precise implementation of a design on a website. The output of the website should be identical to the design, down to every pixel. For this reason, it is a philosophy, not a layout strategy.
Fixed Pixel Layouts
Fixed pixel layouts are a layout strategy where all widths, heights, and positions are defined using fixed pixel values (e.g. width: 1024px). This means the page layout doesn't adapt to different screen sizes or resolutions. They are rigid and non-responsive, and are optimised for a single resolution or screen size. Because of this, Fixed Pixel Layouts break on small screens (like mobiles) and on large displays. They are very much associated with older websites and legacy intranet tools. Early websites would often say "Best viewed with Internet Explorer (or Netscape Navigator) at 800×600 (or 1024×768)". These are prime examples of where Fixed Pixel Layouts were used.
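To make the contrast concrete, here's a hedged sketch (the class names are mine) of a fixed layout next to a fluid equivalent:
/* Legacy: rigid, only happy at roughly 1024px wide */
.page-fixed {
width: 1024px;
}
/* Fluid: fills small screens, caps line length on large ones */
.page-fluid {
max-width: 64rem;
margin-inline: auto;
padding-inline: 1rem;
}
The fluid version costs nothing extra to write, yet works from a phone all the way up to an ultra-wide monitor.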
Fixed Pixel Summary
Fixed pixel layouts and pixel-perfect design are outdated. They are relics from a time when monitor sizes were uniform and browser zooming was uncommon. Pixel-perfect demands you match the design mockup exactly, which usually ends in fragile CSS and breaks the moment someone dares to increase their font size or use a high contrast mode. Fixed pixel layouts take it even further by hard-coding dimensions, making sites fall apart on anything that isn’t a desktop from 2010. Adding to their drawbacks, neither option integrates well with modern accessibility standards, responsive design principles, or current web usage patterns.
CSS Floats for Layout
CSS floats were my go-to layout technique for many years, simply because there really wasn't a viable cross-browser alternative (other than tables, which I covered earlier in the post). Floats were originally intended for wrapping text around images on a page; they weren't intended for full-page layouts. At the time, designers and developers used them as a creative way to achieve certain layouts, not because that was the design goal of the CSS specifications.
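For reference, this is roughly all floats were ever designed to do (the selector is illustrative):
/* The original intent: let paragraph text wrap around an image */
img.illustration {
float: left;
margin: 0 1em 1em 0;
}
Everything beyond this, multi-column pages, sidebars, whole grids, was developers bending the tool far past its purpose.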
Common issues
Float-based layouts came with a major issue: fragile designs often required “clearfix” hacks to stop float containers from collapsing.
Example clearfix hacks
Here are a few example hacks that ensured the float container would wrap around the floated elements:
Modern (Recommended):
.clearfix::after {
content: "";
display: table;
clear: both;
}
Usage:
<div class="clearfix container">
<div style="float: left;">Left</div>
<div style="float: right;">Right</div>
</div>
Overflow Hidden (Quick Fix):
.container {
overflow: hidden;
}
This was my personal go-to for "clearfix" as it was so simple to add to the CSS. A nasty downside of this method was that it would hide any content that needed to overflow outside the float container, so its usage really depended on the design requirements.
Float the Container Itself (Not Recommended):
.container {
float: left;
width: 100%;
}
I don't remember really using this method, simply because it could interfere with other layout elements on a page and create unexpected layout shifts during page rendering.
Using display: flow-root (Modern, Clean Alternative):
.container {
display: flow-root;
}
This is far too modern for me; it didn't exist when I was still building UIs regularly, having only been stable in major browsers since January 2020. The advantage of this method is that no pseudo-elements or hacks are required. It's the recommended approach if you are modernising code and want the cleanest solution without adding extra markup. Other float-based issues include:
- Poor readability: Additional markup was required just for the container clearfix hacks. This also impacted maintainability too.
- Inconsistent behaviour: Floats tended to behave differently depending on the browser in use, although not as problematic as it was, it still requires more testing to ensure cross-browser UI consistency.
- Stacking issues: Aligning text, or centring horizontally and vertically, is non-trivial.
Modern Alternatives
As with most of the topics in this section of the post, there are a number of modern alternatives to float-based layouts:
- Flexbox: One-dimensional layouts (ideal for nav bars, toolbars, etc).
- CSS Grid: Two-dimensional layouts (ideal for full-page layout and complex structures).
- Media Queries: Enable responsiveness across devices
- Container Queries (still an emerging technology): Context-aware layout changes.
Performance and Maintainability
Modern layout techniques:
- Lead to cleaner and more maintainable code.
- Reduce reliance on utility classes for clearing float-specific bugs.
- Enhance responsiveness and accessibility by ensuring that the HTML code's structure is more predictable.
Legacy Support Considerations
Floats still appear in legacy code, so understanding them remains important for maintenance and modernisation. However, they should be avoided in greenfield projects unless strictly necessary (I'm open to suggestions if anyone has an idea as to what this necessity might be).
CSS Floats Summary
Using CSS floats for layout is now considered an anti-pattern. While historically important, floats have been replaced with modern layout systems like Flexbox and Grid, which offer cleaner, more maintainable, and more powerful solutions. In the future as the web platform evolves, newer layout techniques such as CSS Subgrid, Container Queries, and Anchor Positioning are also progressing through standardisation and will further improve layout flexibility. Avoiding floats is a key best practice when building or modernising frontend architecture.
Faux Columns
I'm not sure why the French word "Faux" was chosen for this technique, rather than just "false" or "fake", maybe to make it sound more appealing? Or more complex than it actually is? The term works though, as in English, it is used to describe something made to look like something else, which is precisely what this technique did.
What is it?
In the early days of CSS there was no reliable way to make two or more columns stretch to the same height when the content length in the columns varied. This was because floats or inline-block elements don't align their heights, so developers looked for a workaround.
How it worked
It was a clever workaround actually, it typically worked like this:
- A background image was applied to the parent container of the floated columns (the columns designers wanted to appear equal in height).
- This background image was often a vertical gradient or a solid block of colour, repeated vertically (repeat-y) down the parent container, giving the illusion (the "fake") that the columns were of equal height.
- As each of the inner floated elements grew with its varying content, the container's background image extended with it. Very clever, huh!
The CSS for this "workaround" was as simple as this:
/* container with the background repeat vertically */
.container {
background: url('faux-columns.png') repeat-y;
}
/* Note the width of this column, it is important */
.left-column {
float: left;
width: 200px;
}
/* Use an identical margin to the width, to "push" the right column into place next to the left column */
.right-column {
margin-left: 200px;
}
The background image of the container would have 2 sections, one colour for the left column and another for the right, tricking the user's eyes into seeing columns of equal height!
Check out the Faux Columns A List Apart (ALA) article by Dan Cederholm from January 2004, if you are still unsure how it works, he explained it a lot better than I have!
Limitations
Unfortunately, as with all of these early CSS workarounds, there were limitations. The faux columns technique was no different. These limitations included:
- Not adaptable: It required the background image to perfectly match the layout dimensions of the container and its inner "columns".
- Unresponsive: Exactly what the word says; this only worked for fixed layouts, and fluid or more dynamic layouts simply broke the illusion.
- Maintenance: The technique was difficult to maintain, since any layout change required editing the CSS, the background image, or both.
- Poor semantics: Yes, it solved the visual presentation problem, but the underlying code wasn't semantic; the additional <div>s added purely for layout purposes held no semantic meaning.
Modern Alternatives
As you would expect, there are modern alternatives that make this layout trivial:
Flexbox
With Flexbox, it really is this simple:
.container {
display: flex;
}
.left-column, .right-column {
flex: 1;
}
CSS Grid
In CSS grid, it's even easier as all child elements in a row are explicitly defined and align by default, no extra CSS required:
.container {
display: grid;
grid-template-columns: repeat(2, 1fr); /* 1fr = 1 fraction unit */
}
Can I still use it?
Errr, silly question: no, not at all! Look how effortless the modern alternatives above are! Imagine how good it feels to rip out the faux column code from a legacy codebase and replace it with 1 or 2 lines of CSS!
Faux Columns Summary
The Faux Columns technique was one of those clever hacks we leaned on back when CSS didn’t give designers and developers much to work with. It did the job, but it was fragile and fiddly, and you were always one layout change away from breaking it. These days, it’s more of a historical curiosity. Flexbox and Grid have long since made it obsolete, and with newer tools like Subgrid and Container Queries coming through the standards process, we’ve moved on from trickery to browser tools that are actually built for layout.
Zoom Layouts (using CSS zoom for responsiveness)
Back when responsive design was first emerging, a technique called Zoom Layouts appeared as a way to scale whole elements within a UI. It took hold because responsive CSS layout techniques at the time were very limited.
Example
A simple example of this is given below:
.container {
zoom: 0.8;
}
This CSS is easy to understand: it simply scales the entire .container to 80% of its original size.
When was it useful?
This technique was useful when you needed to shrink or enlarge an entire layout without refactoring a fixed-width design. It also came in handy when working with legacy layouts that could not adapt with fluid widths or media queries. Lastly, it was used as a workaround before widespread browser support for the transform: scale() CSS property or relative units like rem, em, %, vw, and vh.
Why is it outdated?
There are a number of reasons as to why the Zoom Layout technique is now outdated. These include:
- zoom is non-standard and inconsistent: For most of its life, the zoom property wasn't part of any official CSS specification. It was a proprietary feature that originated in Internet Explorer and was later adopted by WebKit- and Chromium-based browsers, while Firefox notably went years without supporting it, making cross-browser layouts using the technique very tricky.
- Causes accessibility issues: zoom does affect layout scaling, but it doesn't interact well with user-initiated zoom or accessibility scaling preferences. Using this technique can therefore create barriers for users with visual impairments who rely on native browser zooming or OS-level zooming tools.
- Breaks layout semantics: Zoomed elements don't always reflow correctly; for example, text can reflow outside its container, images can become blurry, and form elements may not align correctly when scaled.
- Modern CSS has better solutions: As with most outdated techniques in this post, modern browsers now support much better layout techniques and relative units that make responsive design more consistent and easier to maintain. These include Flexbox, CSS Grid, and the rem, em, %, vw, and vh units. Along with media queries and container queries, this gives developers the ability to adapt individual elements proportionally, rather than resorting to scaling the entire UI.
- Performance issues: The use of zoom can cause serious performance issues, especially on low-powered devices, since the browser scales rasterised layers rather than reflowing content natively, which increases UI repaint costs.
Can I still use it?
Seriously, only if you hate your users and love additional maintenance. In practical terms, using it would not be a responsible choice; avoid it. If you come across a critical legacy site using this approach, plan to refactor it with modern techniques. Build your layouts using CSS Grid or Flexbox for flexibility across breakpoints, implement fluid typography with clamp and viewport units, adopt container queries for component-level responsiveness, rely on viewport-based units for consistency, and always test with browser zoom and assistive technologies to ensure accessibility and adaptability for all users.
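As a quick illustration of the fluid typography point, a single clamp() declaration replaces the whole zoom hack for text scaling (the values here are arbitrary):
/* Scales smoothly between 1.5rem and 3rem with the viewport width */
h1 {
font-size: clamp(1.5rem, 1rem + 2.5vw, 3rem);
}
Because it's built on rem and viewport units, it also respects user font-size preferences, something zoom never did.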
Zoom Layouts Summary
Using zoom for layout responsiveness is an outdated, non-standard technique that can compromise accessibility, compatibility, and performance. Modern responsive design principles provide far more robust, scalable, and accessible solutions.
If you require a transition approach for legacy systems still using Zoom Layouts, consider refactoring incrementally to CSS Grid and Flexbox combined with relative units like rem or percentages to modernise their responsiveness. Luckily for you, this isn’t the last time you’ll hear about the infamous proprietary zoom property, as it makes quite a few appearances later in the blog post when we dive into those classic IE layout quirks.
Nested ems instead of Pixels
Before the rem unit was added to the CSS Values and Units Module Level 3, developers used the em unit as a responsive strategy to avoid fixed pixel font sizes. Having used it for years, I can confirm it was a real pain in the ass to work with (pardon my French!). When using em units, both font sizes and spacing were sized relative to the font size of their immediate parent container.
For example, given this HTML:
<body>
<div class="container">
<p class="child">Some text here</p>
</div>
</body>
And this CSS:
body { font-size: 1em; }
.container { font-size: 1.2em; } /* relative to body */
.child { font-size: 1.2em; } /* relative to .container, so compounded */
Can you guess what the .child font size of the text is in pixels?
Better get your Math(s) hat on, let's go through it!
Default body font size = 16px, so 1em × 16px = 16px.
The .container DIV is relative to the body font size, so: .container font size = 1.2em × 16px = 19.2px.
The .child paragraph is relative to the .container font size, so: .child font size = 1.2em × 19.2px = 23.04px.
That's right, that well-known font size 23.04px!
Now this is just a very basic example, imagine if you include em units for margins and paddings too! And also layer on additional nesting! Hopefully, you are starting to realise how painful em units were to use on a website, especially when the only viable alternatives were percentages (which had the same relative nesting issue and were even less intuitive to use than em), or CSS keywords e.g. font-size: small, medium, large, x-large, etc. As you can see, there weren’t a lot of viable or maintainable options in terms of responsive typography and spacing in the early responsive design era (around 2010-2013).
Why is it outdated?
- Complexity and unpredictability: Nested ems lead to compounded calculations as we saw in the simple example I gave above, making sizing unpredictable in deeply nested components. A small change in a parent font size cascades unexpectedly and could completely obliterate your well-crafted layout.
- Maintenance overhead: Adjusting layouts or typography with nested ems quickly creates brittle CSS and significant technical debt, especially when ems are used for spacing like margins and padding.
- Inconsistent UI scales: Components may render differently in different contexts if they rely on em units, especially in large applications with diverse layout containers.
Modern Alternatives
There are several modern replacements for nested em units. These include:
- rem units for consistent global scaling relative to the root font size (see the example after this list)
- Clamp-based fluid typography for responsive design, for example Utopia.fyi.
- CSS custom properties (variables) for consistent, maintainable scales
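For comparison, here's the earlier example rebuilt with rem units. Because rem is always relative to the root font size, nothing compounds, no matter how deep the nesting goes:
html { font-size: 100%; } /* 16px by default, and respects user preferences */
.container { font-size: 1.2rem; } /* 19.2px */
.child { font-size: 1.2rem; } /* still 19.2px, not 23.04px */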
Can I use them today?
You could, but I have no idea why you would! When more viable alternatives exist today, like rem units for global scaling, clamp for fluid typography, and CSS variables for maintainable scales, why make life harder than it needs to be?
Nested ems Summary
Using nested em units is outdated. It adds unnecessary complexity and unpredictability. For modern responsive design you are far better off using rems for consistent global scaling, or taking advantage of the clamp CSS function if you are feeling adventurous. Lastly, you could always use modern CSS variables for more consistent and maintainable code.
Setting the browser's base font size to 62.5%
As a direct follow-on from the nested em technique earlier in the post, developers came up with an alternative to simplify the math(s) behind relative units (percentages had the same "relative to parent" issue as ems). They often set the font size on the <html> element to:
html { font-size: 62.5%; } /* default font size is now 10px, not 16px; the scaling makes em units easier to work with (base-10 rather than base-16) */
.small { font-size: 1.6em; } /* 16px */
.medium { font-size: 2.4em; } /* 24px */
.large { font-size: 3.6em; } /* 36px */
This avoided complicated fractional calculations when using em units:
- Without the percentage: 1em = 16px → 24px = 1.5em.
- With the percentage: 1em = 10px → 24px = 2.4em.
You still had the problem with nested elements, but that was later fixed by using rem units (root em).
Why are these techniques less common today?
- It overrides user defaults: Some users may increase their base font size from 16px for accessibility reasons, hard-coding the base size to 62.5% undermines this user preference.
- Modern teams work with rem: Most developers and teams now accept that 1rem = 16px and use design tokens, variables, or a spacing scale instead of forcing a base-10 (62.5% hack) mental model.
- Simplicity from modern tooling: Design systems, utility classes, and CSS variables handle sizing scales more predictably without the 62.5% hack (sketched below).
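Sketching that out, a small type scale built on custom properties keeps the numbers readable without overriding the user's base size (the token names are mine):
:root {
--step-0: 1rem; /* 16px at default settings */
--step-1: 1.5rem; /* 24px */
--step-2: 2.25rem; /* 36px */
}
h2 { font-size: var(--step-1); }
The comments assume the default 16px base; if a user raises their base font size, the whole scale grows with it, which is exactly the behaviour the 62.5% hack undermined.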
Can I still use it today?
No, not really, mainly due to the list of reasons I've given above. font-size: 62.5% was merely a developer convenience hack to make 1em / 1rem equal 10px for easy math(s). Look at the short list of modern alternatives I have listed above instead.
Base font size Summary
As mentioned above, this math(s) hack for easier font sizing is no longer required on the modern web, in fact it should be avoided due to the impact it has on users who change their base font size for accessibility reasons. Look to use one of the more modern techniques mentioned in the "Why are these techniques less common today?" section above.
Fixed-Width Fonts for Responsive Text
Fixed-width fonts are better known as monospaced fonts; they allocate the same horizontal space to each character. For example:
.mono-spaced-font { font-family: "Courier New", Courier, "Lucida Sans Typewriter", "Lucida Typewriter", monospace; }
The example above demonstrates how to render text in a monospaced font, and in doing so defines a monospaced font stack suitable for most web page implementations. The reason I say "most" is because Windows has 99.73% support for Courier New and OSX has 95.68% support, according to CSSFontStack. That is why it is listed first in the font stack; for the less than 1% of users whose systems don't have it, the browser will look for Courier, and so on, until it reaches the end of the font stack, where it simply tells the browser to use any monospace font the system has available.
Historically, monospaced fonts were used for:
- Terminal emulation.
- Code editors for alignment.
- Early web design, where the layout predictability was prioritised over aesthetics or responsiveness.
Why was the technique used in responsive text?
Developers and designers struggled with the web platform's limitations at the time due to a lack of suitable tools. So monospaced text was usually used for:
- Consistent character spacing across browsers.
- Easier text alignment in table-based layouts.
- Simplifying calculations for layout sizing, since browser layout strategies were much less mature than they are today.
Why is the technique outdated?
The technique is now considered outdated for various reasons, including:
- It limits design flexibility. Modern responsive design has moved on from fixed typography, as fluid typography is now possible, which is better served by proportional fonts that adapt visually to varying screen sizes and reading contexts.
- Monospaced fonts are harder to read, especially for paragraphs or long text blocks. This requirement on the modern web is critical for accessibility-focused design.
- Instead of outdated methods, modern CSS offers enhanced tools and support for contemporary layout techniques. Flexbox and CSS Grid, coupled with typography scaling units like rem, em, vw, vh, and clamp(), enable more predictable and reliable layout control.
- There's no performance difference between modern proportional fonts and monospaced fonts; they have similar browser overhead, so why choose a technique that is harder to maintain and comes with a whole host of other disadvantages?
What's a modern replacement?
There are a number of modern alternatives, some of which we touched on above. These include the use of:
- Fluid typography with CSS clamp() and viewport units to ensure text scales responsively across devices.
- Proportional fonts with font fallback stacks to optimise readability and layout adaptability.
- Only using monospaced fonts for semantic or functional reasons, not aesthetics. Code blocks and tabular data are prime examples of where monospaced fonts should be used to enhance readability of these certain areas of a website. Adventurous designers can even transform a web UI into a retro Terminal window with these elements, though readability must be carefully considered.
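In practice, that last point usually boils down to a rule like this, reserving monospace for genuinely semantic elements (the exact stack is just a suggestion):
code, pre, kbd, samp {
font-family: ui-monospace, Menlo, Consolas, "Courier New", monospace;
}
Body copy then stays in a proportional font, and only code samples, keyboard hints, and program output render in monospace.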
Can I still use it?
As stated repeatedly in this section of the blog post, while technically possible, it would be highly illogical to employ this technique. Given the numerous disadvantages outlined earlier in this section, utilising such an antiquated method on the modern web would be ill-advised.
Fixed-Width Fonts Summary
There are so many font options available to developers and designers today. There is no way you should ever use a monospaced font for anything besides sections of code, or possibly text in a data table, depending on what type of data you are wishing to display. In both of these cases, a monospaced font can enhance readability if used correctly.
4. The Plugin Era – Flash and Friends
Flash-Based Content
I distinctly remember having a conversation with a then-colleague about iOS not supporting Flash content and how it was the beginning of the end for Adobe Flash (Flash) on the web. At the time he refused to believe it, but thankfully for the web, my prediction came true!
What was Flash-based content?
Flash was a proprietary multimedia software platform developed by Adobe. It was used to:
- Deliver animations, video, and interactive content via a plugin in the web browser.
- Enable rich media applications embedded in websites.
- Power early interactive interfaces on the web; this was way before the web platform matured and could support these types of interactivity natively.
- I personally remember it for Flash-based advertisements, of which I created many when I was first starting out in web development!
Why was it popular?
Flash was hugely popular at the time due to the fact that:
- Cross-browser multimedia support was lacking on the web platform (i.e. no native support)
- Advanced vector animation support
- In 2005, Flash was the sole method for streaming audio and video on the web, as exemplified by YouTube's reliance on it.
- Interaction was programmed through the use of ActionScript. If that sounds very similar to JS, that's because it is: they are both based on the ECMAScript standard. That's a massive oversimplification, but if you are curious, read all about it on Wikipedia.
- In the late 1990s there was a popular trend on the web of having completely pointless Flash intros that would load and play automatically before you entered a site. There are countless examples of these intros on YouTube if you are interested!
Why was it deprecated?
There are many reasons as to why Flash is now deprecated. These include:
- Flash was well-known for serious security vulnerabilities, which were often used for malicious software and browser takeovers.
- Flash content often consumed significant CPU and memory, leading to poor performance and excessive battery drain on mobile devices.
- As mentioned earlier, Apple refused to support Flash on iOS, citing security, performance, and stability concerns, which contributed heavily to its decline.
- Adobe's proprietary Flash technology was incompatible with open web standards, hindering accessibility, interoperability, and sustainability.
- Lastly, open web standards and the web platform evolved to replace Flash with native (and non-proprietary) functionality like:
  - Native video and audio playback (<video> and <audio> tags)
  - CSS animations and transitions
  - Canvas and WebGL for interactive graphics and games
  - SVG for scalable vector graphics
Flash met its end with the advent of modern web APIs, including HTML5, CSS3, and modern JS.
In 2017 Adobe announced that Flash's end of life would be in 2020. In December 2020 Adobe released the final update for Flash. By January 2021, major browsers disabled Flash by default and eventually blocked Flash content entirely.
Can I still use it?
At last! A straightforward answer to this question: No, it's impossible to use Flash on the modern web, as Flash content is no longer supported in any modern browser. Simple! RIP Adobe (Macromedia) Flash, 1996 to 2020. You won't be missed.
Modern Alternatives
As mentioned above, there are a number of native browser-based alternatives to Flash functionality (HTML5, CSS3, modern JS). These are:
- Native video and audio playback (<video> and <audio> tags)
- CSS animations and transitions
- Canvas and WebGL for interactive graphics and games
- SVG for scalable vector graphics
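For example, video that once needed a Flash embed is now just markup (file names are hypothetical):
<video controls width="640" poster="images/preview.jpg">
  <source src="video/intro.webm" type="video/webm">
  <source src="video/intro.mp4" type="video/mp4">
  <p>Sorry, your browser doesn't support HTML5 video.</p>
</video>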
Scalable Inman Flash Replacement (sIFR)
Before the introduction of the @font-face at-rule, now defined in the CSS Fonts Module Level 4 specification, web designers and frontend developers were desperately seeking to expand the limited number of cross-browser, cross-operating-system fonts available on the web. To achieve that, a number of workarounds and pieces of generally ingenious browser hackery were built. From the name of this technique you may also be able to guess the answer to the "Can I still use it?" section!
What is sIFR?
Scalable Inman Flash Replacement (sIFR) was a creative technique that used JS and Flash together to replace HTML text elements with Flash-rendered text. It allowed developers to embed custom fonts directly within a Flash file; they could then modify the HTML text, and the Flash file would dynamically render the updated content.
This workaround was required at the time because there was very limited support for custom web fonts via the use of @font-face. Surprisingly, @font-face was first introduced by Microsoft in Internet Explorer 4 in 1997, using Embedded OpenType (EOT) as the font format. This was proprietary to IE, so no other browsers supported it. Since there wasn't a cross-browser way to use custom fonts, alternative techniques like sIFR emerged.
sIFR Popularity
sIFR emerged in the early to mid-2000s, with its first public version released around 2004-2005. It was widely used until around 2009-2010, especially for headings and branded typography. Its popularity grew during that period due to the technique's preservation of SEO and accessibility advantages: the original HTML text remained in the Document Object Model (DOM), allowing it to still be read by search engines and assistive technology. And once set up, it was simple to update the underlying text; sIFR took care of the rest. There was also the added bonus that the text remained selectable, so it could be copied and pasted when needed. It sounds like a great solution, so where did it all go wrong?
Why is it outdated?
There are several reasons why the sIFR technique is now outdated. We covered the main one in the previous "Flash-Based Content" technique above:
- It relies on the Adobe Flash Player browser plugin, which is now deprecated and blocked in all major browsers due to security vulnerabilities and performance issues.
- It slowed down page performance by increasing page load time, due to having to download the Flash assets.
- Although it was partially accessible from an HTML perspective, it certainly wasn't perfect as it introduced accessibility and compatibility issues on devices without Flash support.
- Web standards came to the rescue. CSS3 brought with it native cross-browser support for @font-face custom fonts, without the need for any browser plugins. The new standards supported the Web Open Font Format (WOFF and WOFF2), a standardised and optimised format for the delivery of fonts on the web. Basically, HTML5 and CSS working together simply removed the need for plugin-based typography workarounds.
Can I still use it?
As I mentioned above, this is a pretty simple question to answer… No, not at all; the removal of Flash from all modern browsers in use today guarantees that!
Modern Alternative
There's only one modern alternative that should be used on the web today: @font-face. An example of its usage is given below:
@font-face {
  font-family: "MyCustomFont";
  src: url("fonts/MyCustomFont.eot"); /* IE9 Compat Modes */
  src: url("fonts/MyCustomFont.eot?#iefix") format("embedded-opentype"), /* IE6-IE8 */
       url("fonts/MyCustomFont.woff2") format("woff2"), /* Super modern browsers */
       url("fonts/MyCustomFont.woff") format("woff"), /* Modern browsers */
       url("fonts/MyCustomFont.ttf") format("truetype"); /* Safari, Android, iOS */
  font-weight: normal;
  font-style: normal;
}
Thankfully, modern browsers widely support WOFF, simplifying the above code:
@font-face {
  font-family: "MyCustomFont";
  src: url("fonts/MyCustomFont.woff2") format("woff2"), /* Super modern browsers */
       url("fonts/MyCustomFont.woff") format("woff"); /* Modern browsers */
  font-weight: normal;
  font-style: normal;
}
In fact, any modern web browser that supports WOFF also supports WOFF2. Therefore, the code you should use today is as follows:
@font-face {
  font-family: "MyCustomFont";
  src: url("fonts/MyCustomFont.woff2") format("woff2");
  font-weight: normal;
  font-style: normal;
}
In all instances above you'd use the custom font like so:
.myfontclass {
  font-family: "MyCustomFont", sans-serif; /* plus any other fallback fonts */
}
The browser will take care of the rest!
Note: you should always provide a font fallback in your font-family value, just in case the font file fails to load or is accidentally deleted from your server. There is so much more to the use of @font-face; if you are interested in advanced topics around its usage, you should definitely check out Zach Leatherman's copious work on the subject over the years!
Cufón
Much like sIFR above, Cufón was created due to the lack of options when it came to using custom fonts on the web. It was popular around the same time as sIFR (late 2000s and early 2010s), essentially solving the same problem with a different cross-browser technique. Whereas sIFR used Flash, Cufón worked like so:
- Fonts were converted into vector graphics, and canvas (or VML for older versions of IE) was then used to render the text in place.
- JS then replaced the HTML text with the custom-font-rendered version of the text.
- Since it was JS-based, there was no need for any plugins (like Flash).
Why was it used?
As previously noted with sIFR, browser support for CSS @font-face was inadequate or inconsistent at the time. Designers and developers wanted to use custom fonts for branding and stylistic reasons without users having to install a plugin or the fonts locally. Cufón was attractive because it:
- Didn't require a plugin for it to work.
- Provided near pixel-perfect rendering of the custom font.
- Was easy to integrate with minimal JS setup.
Why is it outdated?
- Modern browsers all support @font-face, a much better solution as it allows the direct use of web fonts like WOFF or WOFF2 files without JS hacks.
- Its usage impacted accessibility. Because Cufón replaced text in the DOM with rendered graphics, screen readers couldn't interpret the replacements as text, degrading a site's accessibility.
- Cufón caused web performance issues: the text replacement script ran after page load, which increased page render time, blocked interactivity, and degraded overall performance, especially on slower devices.
- Although Cufón attempted to preserve the replaced text in the DOM, the results were often inconsistent, mainly because search engine crawlers at the time struggled to parse JS-replaced content reliably.
- Cufón didn't work with responsive design: once rendered, the replaced text didn't scale correctly unless the page was reloaded at the new viewport size.
Can I still use it?
Although the site is still available here, and the cufon.js script can still be downloaded, the font generator has been taken down and is no longer maintained. So to get it working you'd need to jump through quite a few hoops! What I'm really trying to say is: yes, you can, but it isn't worthwhile. Even the original author, Simo Kinnunen, says on the website:
Seriously, though you should be using standard web fonts by now.
Modern Alternatives
Rather than repeat myself, I'll refer you to the same section from the sIFR methodology above.
GIF Text Replacements
Although I am aware that this is not a plugin, this seemed like the most appropriate section for it, since we are discussing font replacement techniques. Of all the custom font techniques I've listed, this one is by far the worst in my opinion. It was popular in the late 1990s and early 2000s, when there were very few other options for using custom fonts on the web.
What is a GIF Text Replacement?
It's a self-explanatory name. To use a custom font, a designer would create a static asset (usually via Photoshop or similar), then the developer would cut the text out as a GIF. This image would then replace the HTML text on the page, making it look like a custom font was in use. Example code for this technique can be seen below:
<body>
  <!-- GIF text replacement for a heading -->
  <h1>
    <img src="images/heading-text.gif" alt="Welcome to Our Website">
  </h1>
  <ul>
    <li>
      <!-- GIF text replacement for a navigation link -->
      <a href="/about.html">
        <img src="images/about-link.gif" alt="About Us">
      </a>
    </li>
  </ul>
</body>
Note: Some readers may wonder why a GIF was used instead of a PNG, since both support transparency. The main reason is that Internet Explorer offered poor support for transparent PNGs and required a complicated hack to make them work, which I will explain later in the post.
Why was it so bad?
- It was time-consuming and maintenance-heavy. Should the design change, so would all the GIFs, which had to be manually cut out of the design files again.
- It was bad for accessibility. Screen readers cannot process text embedded within GIFs or images that lack meaningful alt text. Absent or outdated alt text therefore created an exclusionary experience for users with visual impairments.
- It was bad for SEO. Search engines could not index text within images, harming discoverability. The technique relied on developers providing accurate alt text, which wasn't always the case.
- It was bad for performance. At the time of its popularity, the web was transitioning from HTTP/1.0 to HTTP/1.1. Although HTTP/1.1 handled TCP connections better than HTTP/1.0, those connections were still very expensive in web performance terms, and every GIF replacement meant an extra request competing for the browser's small pool of connections, which increased page load times.
- It was terrible for responsiveness. Although the responsive web was still a few years away when this was popular, the key difference between images and text is that text can scale across different devices and screen sizes. Images simply couldn't do that, leading to poor rendering and pixelation on some devices.
- GIF only supports 256 colours (8-bit), and for the GIF to be transparent, one of those colours has to be the transparent one. So if your text had a complex colour palette, it either wouldn't work or would just look terrible.
Can I still use it?
No, it's as simple as that. It's a technique with so many negatives and so few positives, it should be confined to the interesting history of the web platform!
Modern Alternatives
Again, rather than repeat myself, I'll refer you to the same section from the sIFR methodology above.
Adobe AIR
I remember going to a conference around 2007/2008 where there was so much hype about Adobe AIR. It was going to be the "next big thing", because it enabled developers to create rich desktop and mobile apps using only web skills and technologies.
What was Adobe AIR?
The AIR in Adobe AIR stood for Adobe Integrated Runtime. It was a cross-platform runtime developed by Adobe that allowed developers to use HTML, JS, Adobe Flash, Flex, and ActionScript, combined, to build standalone desktop or mobile applications. It supported Windows and macOS on desktop, and later Android and iOS on mobile.
Furthermore, it also enabled:
- Running Flash-based applications outside the browser.
- Rich multimedia, animations, and offline capabilities.
Why is it outdated?
- It relied on Flash and ActionScript. With Flash reaching its end of life in late 2020, due to persistent security vulnerabilities and the momentum behind open standards like HTML5, CSS3, and ES6+ JS, AIR lost its core technology.
- A shift to modern cross-platform frameworks. The market moved towards more efficient and performant technologies like:
- React Native
- Flutter
- Electron (for desktop apps)
The advantage of these frameworks is that they use native components or JS runtimes, without the heavy reliance on Flash. This offers developers and users greater performance, maintainability, security, and community support.
- Lack of Adobe support. Adobe handed AIR over to a subsidiary of Samsung (Harman) in June 2019 for ongoing maintenance. Support is still provided by Harman, but only for enterprises with legacy applications they still rely on. There's no active innovation or new feature work on AIR by Harman.
- Security concerns. As with Flash in the browser, security was always an ongoing issue, and this continued in AIR since Flash was the backbone of its core functionality. Continuing to build on AIR poses security risks and compatibility limitations with modern browsers and operating systems.
- Lack of developer interest and ecosystem. Developers on the modern web tend to favour open ecosystems with an active community for support and updates. Adobe AIR’s ecosystem has completely stagnated.
Can I still use it?
As with any other Flash-based technology, I'm afraid not; it is no longer supported, and even if you could, there are more modern open frameworks you could use, like React Native, Flutter, or Electron (for desktop applications). AIR is now history, and if you are using an AIR application within your digital estate, it is strongly recommended you prioritise migration, due to high maintenance costs, poor security, and a lack of developer availability.
Yahoo Pipes
It is easy to forget just how dominant Yahoo was on the web during the late 1990s. Before Google emerged as the leading search engine, Yahoo was one of the primary gateways to the internet. Its peak influence was between 1996 and 2000, when it played a central role in how people accessed and navigated the web. It was the default starting point for most web users thanks to its combination of a curated directory, news, and email services. It was also a technology leader on the web, as I mention later in the blog post when I look at its extensive JS library: Yahoo! User Interface (YUI).
I remember using Yahoo Pipes for combining my many RSS feeds at the time, it really was a fantastic visual tool for data manipulation.
What was it?
Yahoo Pipes was a visual data mashup tool released in 2007. It allowed developers and non-developers alike to aggregate, manipulate, and filter data from around the web. It provided a drag-and-drop interface where users could connect various manipulation modules by creating pipes between them. You were essentially piping data "through" the tool (hence the name!), and the manipulated data would come out the other end. It was considered highly innovative at the time and was used a lot for rapid prototyping.
Why is it outdated?
Have a look at the Yahoo! homepage today and you will see it is a shadow of its former self; it looks more like a news aggregation service now than a popular search engine. This is because Yahoo made a giant shift in business strategy, moving away from developer tools and open web utilities to concentrate on advertising and media products. Although Yahoo Pipes was popular with technology enthusiasts, it was never a mainstream product for Yahoo, so the operational expenses vs. usage statistics didn't align with Yahoo's business priorities. Lastly, the web evolved beyond Yahoo Pipes, and it couldn't keep pace with the changes. Modern APIs, JSON-based services, and JS frameworks allowed developers to build similar data transformations programmatically with greater flexibility. Due to all these factors, Yahoo Pipes was sadly shut down in 2015.
Can I still use it today?
Nope, it was shut down by Yahoo in 2015, with no further support or hosting.
Modern Alternatives
While innovative at the time, visual mashups have been replaced by:
- Dedicated data transformation tools (e.g. Zapier, Integromat/Make).
- Serverless functions (AWS Lambda, Azure Functions, Cloudflare Workers, and Fastly Compute@Edge) for real-time data processing.
- Low-code platforms with integrated API management.
The web has also become a lot more complicated when it comes to web scraping and feed aggregation. Anti-scraping measures, authentication, and API rate limits barely existed when Yahoo Pipes was created, so the techniques it employed couldn't support the robust backend processes now required to handle these requirements. Although Yahoo Pipes was innovative at the time, it has long been discontinued and is now an obsolete part of web platform history.
PhoneGap / Apache Cordova
The one thing that sticks in my mind when I think about PhoneGap is when I saw a talk from one of the Nitobi engineers back in 2009 / 2010, he said something along the lines of:
We are using PhoneGap to bridge the current gap for developers in creating cross-platform mobile applications. Our goal is for it to become obsolete once native platforms fully support these capabilities.
This really impressed me at the time, spending so much time on a product with the aim for it to become obsolete.
What was PhoneGap?
PhoneGap was a mobile development framework created by a Canadian company called Nitobi in 2009; Nitobi was later acquired by Adobe in 2011. PhoneGap allowed web developers to build cross-platform mobile applications simply, using web technologies: HTML, CSS, and JS. The applications were packaged into native containers, allowing them to run as mobile apps while also having access to device APIs via JS.
Why was it required?
At the time of its release, the mobile web was incredibly popular and getting bigger month on month. It's important to remember that the first version of the iPhone had been released only two years before (June 2007). This really was an exciting time in the web platform's history. If you wanted to release a cross-platform application at the time and wanted to support Android, iOS, and Windows Phone, you needed developers with knowledge of multiple programming languages:
- Android required Java.
- iOS required Objective-C.
- Windows Phone required C#.
Finding a single developer with all these skills would be incredibly hard, so to build and maintain all 3 platforms usually required a whole team of developers.
One of the main advantages of PhoneGap was that all 3 platforms had a single central codebase, which reduced development time and maintenance.
Under the hood, PhoneGap leveraged Apache Cordova (the open-source project created when Adobe donated the codebase to the Apache Software Foundation in 2011), essentially branding and wrapping it for broader adoption.
Why is it outdated?
- PhoneGap apps performed poorly. This was especially true for graphics-intensive, or animation-heavy interfaces. This is because PhoneGap apps ran within a WebView container rather than as a native application.
- Adobe stopped supporting it. This seems to be a common theme in this blog post… Adobe ended support for PhoneGap in October 2020. At the time developers were advised to either migrate to Apache Cordova or consider other frameworks.
- Alternatives evolved. As the mobile platform expanded, so did the availability of other frameworks to help developers build apps. These alternatives included:
- React Native allowing near-native performance with JS and React paradigms.
- Flutter enabling high-performance apps with a single Dart codebase and native compilation.
- Progressive Web Apps (PWAs) reducing the need for wrapping web apps as native apps in many use cases.
- Capacitor (by Ionic) providing modern native bridging with a streamlined developer experience compared to PhoneGap/Cordova.
- PhoneGap's ecosystem growth stalled. As newer frameworks were released and Adobe stopped supporting it, the community moved away and PhoneGap's plugin ecosystem stagnated.
Can I still use it?
No. There are several alternatives listed above that you should consider instead. PhoneGap served its initial purpose as a bridge, enabling developers to build cross-platform mobile applications, and, as was its mission, it became obsolete once native platforms and newer frameworks fully incorporated these capabilities.
Microsoft Silverlight
NOTE: I never used Silverlight (though I do remember it being announced); I'm just adding it to the post for completeness.
What was Silverlight?
Silverlight was a rich internet application (RIA) framework introduced by Microsoft in 2007. It was conceptually similar to Adobe Flash, designed to deliver interactive multimedia, animations, and streaming video inside the browser.
It used a subset of the .NET Framework, with applications typically written in C# or VB.NET, and presentation defined using XAML (an XML-based UI markup language). Developers could reuse existing .NET skills, which made Silverlight attractive in Microsoft-centric enterprises.
Silverlight was often used for:
- Media streaming (notably Netflix in its early streaming days)
- Interactive dashboards and line-of-business web apps
- Cross-browser, cross-platform plugins (Windows and Mac were supported, but Linux support lagged)
Why is it considered legacy?
- Plugin dependency: By the 2010s, browser vendors had moved away from browser plugins in favour of newly developed web platform technologies. Plugins were often insecure, unstable, and inaccessible.
- Limited cross-platform reach: Silverlight was well supported on Microsoft platforms (as you would expect!) and on Mac, but it had limited support on Linux (via the Moonlight project) and no support on mobile devices (Android, iOS).
- Rise of open web standards: HTML5, CSS3, and JavaScript matured rapidly, delivering native audio, video, and advanced graphics (via canvas), so plugins were no longer required.
- End of support: Considering the above points, it's surprising that Microsoft only ended support in October 2021, although browser vendors stopped long before that: Chrome in 2015, Firefox in March 2017, and Edge never supported it at all.
Can I still use it?
Well, this is another easy one. No, you can't: Microsoft has dropped support, and no modern browser supports it either.
Modern Alternatives
The answer to this is basically native web platform APIs. Specifics include:
- Video streaming (see the sketch after this list):
  - HTML5 <video> element with adaptive bitrate streaming (HLS, MPEG-DASH).
  - DRM handled via Encrypted Media Extensions (EME).
- For interactive apps and dashboards:
- Modern JavaScript frameworks such as React, Angular, Vue, or Svelte.
- WebAssembly (Wasm) for near-native performance, including options like Blazor (from Microsoft) which lets you run .NET in the browser without plugins.
- For graphics, animation, and UI:
- CSS3 animations and transforms for UI transitions.
- Canvas API and WebGL for 2D and 3D graphics.
- SVG for scalable vector graphics.
- WebGPU (emerging) for modern GPU-accelerated rendering.
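To sketch the video-streaming route, here's adaptive HLS playback using the hls.js library, falling back to Safari's native HLS support (the manifest URL is hypothetical):
<video id="player" controls></video>
<script src="https://cdn.jsdelivr.net/npm/hls.js@1"></script>
<script>
  var video = document.getElementById("player");
  var src = "https://example.com/streams/master.m3u8"; // hypothetical HLS manifest
  if (video.canPlayType("application/vnd.apple.mpegurl")) {
    // Safari plays HLS natively
    video.src = src;
  } else if (Hls.isSupported()) {
    // Elsewhere, hls.js implements HLS on top of Media Source Extensions
    var hls = new Hls();
    hls.loadSource(src);
    hls.attachMedia(video);
  }
</script>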
Silverlight Summary
Silverlight is legacy because it relied on a now-obsolete plugin model, had poor cross-platform support, and was outpaced by open web standards. Today, everything Silverlight did can be done more securely and portably with HTML5, CSS, JavaScript frameworks, and WebAssembly.
Java Applets
NOTE: I never used Java Applets (although I remember them!); I'm just adding them to the post for completeness.
What were Java Applets?
Java Applets were small applications written in Java that could be embedded into web pages and run inside the browser through a special Java Plug-in (based on the NPAPI plugin architecture). Introduced in the mid-1990s, they were part of Sun Microsystems’ vision of “write once, run anywhere” – letting developers build interactive content and complex functionality that browsers of the time (pre-HTML5) could not support natively.
They were often used for:
- Interactive educational content and simulations
- Online games
- Financial tools like mortgage calculators or trading dashboards
- Enterprise intranet applications
Why is it considered legacy?
- Plugin dependency: Using applets required a user to install the Java Runtime Environment (JRE) plugin and keep it updated. I distinctly remember the nuisance of those constant update prompts!
- Security risks: The Java plugin was a frequent target of exploits and malware, leading browsers and enterprises to actively block or disable it.
- Performance and user experience: Applets often loaded slowly, had inconsistent UI integration with web pages, and required clunky permission dialogs.
- Decline of NPAPI support: Browsers started phasing out NPAPI (the plugin technology applets relied on). Chrome dropped NPAPI in 2015, Firefox in 2017 (except for Flash, until 2021), and Microsoft Edge never supported NPAPI at all.
- Official deprecation: Oracle deprecated the Java browser plugin in Java 9 (2017) and removed it entirely in later releases.
Can I still use it?
Nope! Modern browsers no longer support it, and Oracle stopped supporting the plugin back in 2017.
Modern Alternatives
This list is basically native web platform APIs. I don't want to repeat myself, so refer to the Silverlight modern alternatives from earlier in the post.
Java Applet Summary
Java Applets are legacy because they relied on a fragile plugin model that posed significant security risks and is no longer supported by modern browsers. Today, HTML5, JavaScript, and WebAssembly provide richer, faster, and safer alternatives without requiring any plugins.
5. The JavaScript Library Explosion
DHTML Beginnings (1997)
Dynamic HTML (DHTML) was all the rage around 1997-1998. By combining the primary web technologies (HTML, CSS, JS, and the DOM), developers realised that they could make a web page interactive and "dynamically" update it without reloading the entire page. The technique was frequently employed for animated HTML elements, such as image rollovers and dynamic navigation menus. It also provided immediate user feedback, particularly for form validation, by checking if the user had entered a valid email address, for instance; if not, standard HTML and CSS could be used to show a user-friendly error message.
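To give a flavour, here's a reconstruction of the kind of image rollover code that was everywhere at the time (file names are hypothetical):
<a href="home.html"
   onmouseover="document.images['navHome'].src = 'nav-home-over.gif';"
   onmouseout="document.images['navHome'].src = 'nav-home-off.gif';">
  <img name="navHome" src="nav-home-off.gif" alt="Home">
</a>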
DHTML wasn't necessarily bad if implemented in moderation; unfortunately, it always seemed like the wild west. What worked in Internet Explorer 4 (IE4) wouldn't always work in Netscape Navigator, because Microsoft and the Netscape Communications Corporation (NCC) had different levels of support for JavaScript. In fact, Microsoft had their own implementation of the ECMAScript standard called JScript. This led to lots of maintenance headaches for developers, as the solution often involved forking code for different browsers.
Secondly, it turned into a bit of copy/paste madness. Since any developer could simply "View Source" on a web page, copy the code, and add it to their own website, you often ended up with a mishmash of different interactions and animations all over a site! Thankfully, the trend eventually died out.
Can I still use it?
A web developer could technically still use DHTML on the modern web, but doing so would be strongly discouraged for any serious or production-level work. This is because in using it, a developer wouldn’t be following modern best practices like separating concerns with structured HTML, CSS and JS, building accessible and performant interfaces, using modular and maintainable code, or leveraging modern frameworks and tooling that enforce consistency, security and scalability.
Modern alternatives
There are several modern alternatives to DHTML, including:
- Using Frameworks like React, Vue, or Svelte
- Native browser APIs
- Modern CSS techniques
- Progressive enhancement and accessibility standards
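As a quick illustration of that last point, the classic DHTML rollover above needs no JS at all today; a minimal sketch (image paths hypothetical):
.nav-home {
  display: inline-block;
  width: 120px;
  height: 40px;
  background: url("nav-home-off.png") no-repeat;
}
.nav-home:hover,
.nav-home:focus {
  /* Swap the image on hover/focus, no script required */
  background-image: url("nav-home-over.png");
}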
Early frameworks and libraries
Prototype.js (2005)
Prototype.js version 1.0 was released in February 2005, and was initially developed to simplify JS tasks in Ruby on Rails (RoR) projects. Its key features were a host of DOM manipulation utilities, an AJAX abstraction to make XMLHttpRequest easier to handle cross-browser, and class-based inheritance in the form of lightweight object-oriented programming (OOP) in JS. But by far its most influential feature was its shorthand DOM element selector $(), later popularised by jQuery.
I remember trying to learn and use Prototype a few times, but as a JS beginner, I found the name confusing. Especially since the prototype object sits at the heart of JavaScript, with almost the whole language hanging off it through things like property and method inheritance.
Can I still use it?
I mean, technically you could, but be warned: it hasn't been updated in almost 10 years! Given the significant evolution of JS and the availability of modern alternatives that leverage the latest browser JS APIs, it would not be a wise choice.
Modern Alternatives
It really depends on what a developer was using Prototype.js for, as it had quite a range of functionality:
- DOM manipulation
- AJAX
- Utility functions
- Templating
- Full replacement
Prototype.js was incredibly powerful. My preference would be to utilise several micro-JS libraries for specific functionalities rather than adopting an extensive framework such as React, but that's just my opinion, given the complexity of the React ecosystem.
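To give a rough sense of why, most of Prototype's bread-and-butter tasks now have native equivalents (the selector and endpoint below are hypothetical):
// was: var box = $('box');
var box = document.querySelector("#box");

// was: box.addClassName('active');
box.classList.add("active");

// was: new Ajax.Request('/api/items', { onSuccess: ... });
fetch("/api/items")
  .then(function (response) { return response.json(); })
  .then(function (items) { console.log(items); });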
Script.aculo.us (2005)
Script.aculo.us v1.0 was released in 2005 as an extension to Prototype.js. It built on Prototype.js by delivering a powerful set of visual effects, animations, and UI components. It also featured drag-and-drop support out of the box, as well as sortable lists and autocompletion widgets. As with Prototype.js, Script.aculo.us's popularity was partly because it was bundled with RoR, giving it widespread adoption within the "Rails" community.
I still remember Script.aculo.us for its distinctive URL, its bright, animated homepage, and how it really embodied the spirit of ‘web 2.0,’ making the web feel more alive. It's a library that left a lasting legacy, influencing later libraries like jQuery UI.
Can I still use it?
I really wouldn't recommend it, as it hasn't been updated in over 15 years! However, if you're curious to delve into some web 2.0 history, the site is still live (on HTTP, not HTTPS).
Modern Alternatives
Assuming you are only looking for a JS library for animations:
- GSAP (GreenSock Animation Platform)
- Motion One (Vanilla JS)
- Popmotion
- Anime.js
- CSS + Web Animations API (WAAPI)
Dojo Toolkit (2005)
Dojo Toolkit was one of the first major cross-browser toolkits, released in March 2005 with version 0.1. It was developed by Alex Russell and maintained by a community of other developers. It's open source and still available on GitHub. It was one of the earliest frameworks to help build rich web applications by simplifying DOM manipulation, AJAX, event handling, animations, and internationalisation (i18n), among many other cutting-edge features. Not only that, it was an early advocate for Asynchronous Module Definition (AMD) and modular JS, and it was one of the first JS libraries with strong accessibility (a11y) support. In 2018 Dojo was rewritten as Dojo 2, which supports TypeScript, reactive patterns, a virtual DOM, and modern build systems. Dojo 1.x is still in use in some long-running enterprise web applications; its last stable release came in 2022, mainly for bug fixes and security updates. New feature development has now shifted purely to Dojo 2+.
Dojo 1.x was incredibly influential for JS libraries like jQuery, MooTools, and Prototype, especially when it came to governance. It was governed by the Dojo Foundation, which later merged with the jQuery Foundation to form the JS Foundation (now part of the OpenJS Foundation).
Can I still use it?
Version 1.x would be a bad idea, but you could technically still use the latest version (8.0.0), although that may not be wise either, given there hasn't been a new release in over three years. It's most likely better to stick with more modern framework alternatives.
Modern Alternatives
There are a number of modern alternatives you could consider, keeping in mind that Dojo was very focussed on Accessibility (A11y) and Internationalisation (i18n):
- React: Maintained by Meta (Facebook).
  - A11y: Strong support, but it’s developer-driven. ARIA roles and keyboard navigation must be implemented explicitly by developers.
  - i18n: Excellent ecosystem (react-intl, formatjs, lingui, etc.)
- Vue.js (v3): Maintained by Evan You and the Vue core team.
  - A11y: Good defaults; still developer-led, but accessible components are emerging.
  - i18n: vue-i18n is well-maintained and powerful.
- Angular: Maintained by Google.
  - A11y: Arguably the best among mainstream frameworks. The Angular Material team publishes a11y guidance, and many baked-in best practices exist.
  - i18n: Built-in i18n support, including message extraction and compile-time translation.
- Svelte / SvelteKit: Maintained by Rich Harris and the Svelte core team.
  - A11y: Improving, but not as mature as React or Angular. Accessible components need to be explicitly chosen or built.
  - i18n: Community libraries exist (svelte-i18n), but official support is not as comprehensive.
Yahoo! User Interface (YUI) (2006)
YUI was released publicly in February 2006, with version 2.0.0 as its first release; this was because there had been lots of internal development and usage within Yahoo! beforehand. It was originally designed to standardise frontend development at Yahoo and provide a solid cross-browser foundation on which the Yahoo team could build feature-rich web applications. It contained a custom loader system that only loaded the components needed, an ingenious approach in the pre-ECMAScript 6 (ES6) module era. Furthermore, it came with a whole host of feature-rich UI widgets, cross-browser abstractions, event handling, DOM utilities, animations, and CSS tools (which heavily influenced the later Normalize.css), and it was one of the first libraries (after Dojo 1.x) to prioritise internationalisation (i18n) and accessibility, specifically Accessible Rich Internet Applications (ARIA).
What I remember most about YUI 2.x is just how huge it was! Not just the sheer number of UI modules, but the file size too: 300–350 KB minified, or 90–120 KB gzipped! This was before the widespread availability of fast broadband, and when hardware and browsers were significantly less optimised. A full build of the library could easily exceed those figures, too. This is why Yahoo also provided a combination-aware CDN service to help reduce the number of requests made and bundle only the components needed. This was a practice that was way ahead of its time!
Can I still use it?
No, not really. It hasn't been updated since 2014; the reason is detailed in this Yahoo Engineering announcement from the time.
Modern alternatives
Given YUI's extensive nature as a framework, and to avoid repetition, I recommend referencing my notes on Prototype.js modern alternatives as an initial guide.
moo.fx (2005)
Moo.fx was a lightweight animation library designed to be unobtrusive, and it worked well with Prototype.js. It focused purely on DOM animations like height transitions and fading. It was part of a movement in JS towards modular codebases for lighter, more responsive interactions, and a distinct departure from larger, heavier animation libraries like Script.aculo.us.
I believe moo.fx was one of the first animation libraries I ever saw. Annoyingly, the site hasn't been archived on archive.org, but I remember it having a simple, colourful homepage with examples of the animations you could get by adding only a tiny (3 KB) library to your page.
It had a fantastic URL too: "http://moo.fx". What's so cool about the name, you may ask? Well, the .fx country code top-level domain (ccTLD) is now obsolete and no longer available. It was originally reserved for Metropolitan France, but it was never officially delegated or made available for registration; France later adopted .fr as its official ccTLD. I can't fathom how Valerio Proietti, the creator of moo.fx, managed to register the name, but it's all true. As proof, only a single record can be found on archive.org, dating back to 2007, and it links to the site's robots.txt file.
Can I still use it?
I can't even find an archived copy of the homepage, let alone the library itself! So, it definitely falls into the "no, you can't still use it" category!
Modern Alternatives
Assuming we are only looking for a modern animation library, it's best to refer to the list I gave above for Script.aculo.us modern alternatives.
MooTools (2006)
The author of moo.fx, Valerio Proietti, wanted more than an animation library. He aimed to develop a complete JS framework with object-oriented programming, modularity, and extensibility as first-class features. Thus, MooTools was born! v1.0 was released in September 2006, and it packed some fantastic features into a lightweight size: a modular core with separate components you could include for extra functionality, an advanced class system (predating ES6 class functionality), powerful DOM manipulation utilities, Ajax handling, effects (moo.fx), and custom events. It was both performant (for the time) and syntactically elegant. Development ceased in the mid-2010s with v1.5 (the final active release). moo.fx and MooTools hold a special place in my memory as some of the first JS libraries I learnt as a junior developer.
Can I still use it?
Well, the website still exists here, but considering it hasn't been updated since January 2016, it's probably best to look for a modern alternative.
jQuery (core) (2006)
In 2006, a truly revolutionary JS library called jQuery was released. Developed by the legendary John Resig, it offered a lightweight, chainable API to simplify tasks like DOM traversal and Ajax, among many other helpful tools and methods.
jQuery always prided itself on its easy-to-use API and its ability to abstract away the many cross-browser bugs arising from the different browser vendors' implementations of JS (Mozilla) / JScript (Microsoft). It finally gave developers a "stable" API on which to start building JS-powered websites without all the stress of cross-browser hacks and forked code to make features work in every browser. With jQuery, it just worked!
I must admit, I absolutely adored jQuery (and still do)! The API was so clean and readable. The complete opposite to the DOM and the JS API! This library has saved more than just my code, it’s rescued entire projects and probably saved my sanity in the process! Especially working in Digital Marketing, as I did at the time. In those days, clients were constantly after the newest, flashiest animations, regardless of usability. It was all about chasing trends. And when the client's paying, you just nod and make that already bouncing button pulse and change colour!
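For anyone who never used it, here's a small, hypothetical taste of that chainable API (v3.x-compatible syntax):
// Select every list item in the menu, style it, animate it,
// and attach a click handler, all in one chain
$(".menu li")
  .addClass("active")
  .fadeIn(200)
  .on("click", function () {
    $(this).toggleClass("open");
  });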
Can I still use it?
Finally, I can say "yes" to this question! Assuming you plan to use v3.x of jQuery, as the 1.x and 2.x branches are no longer supported or maintained. In fact, version 4.0 is currently in beta, according to the support page. Amazingly, almost a full 19 years after its first release, it is still under active maintenance! Not only that, but according to the Web Almanac 2024, it's still the most popular JavaScript library in use on the web! Truly impressive work by the jQuery team and community!
Later era frameworks
Ext.js (2007)
Ext.js version 1.0 was released in April 2007. It was initially developed as an extension to the YUI Library before becoming a fully standalone framework. Its key features included a comprehensive suite of rich UI components, a powerful event model, and advanced layout management capabilities that far exceeded most contemporaries. It introduced a highly structured approach to building web applications, with a strong emphasis on reusable widgets and object-oriented design. But by far its most distinctive contribution was its fully integrated, desktop-like component model for the web. Something rarely seen at the time, and which set the tone for many enterprise-grade JS frameworks that followed.
I never used YUI personally, as its sheer size and breadth of functionality simply didn’t align with the kind of work I was doing at the time. As a result, Ext.js (which as mentioned above, was initially built upon YUI) wasn’t on my radar either. That being said, I’ve included it here for completeness, as it clearly played a significant role in the evolution of rich client-side application frameworks. During my research I discovered how Ext.js transformed into an enterprise-grade toolkit under the Sencha brand. Its strong emphasis on data-driven UIs distinguished it from other lightweight libraries of that period.
Can I still use it?
Yes, Ext.js is still a viable option that you can use on the modern web, if you are building a web application. It continues to be actively maintained by Sencha and even offers a React Extension, allowing for seamless integration of Ext.js components into React applications. However, be aware that Ext.js is now a paid library, with a per-year, per-developer licensing model that can be costly. While a free community version exists, it appears to have a very limited feature set.
jQuery UI (2007)
jQuery UI emerged in 2007 as an official companion library to jQuery, at a time when the JS ecosystem was fragmented and riddled with browser inconsistencies. It was developed to bring a unified, extensible suite of widgets, effects, and interactions to the web. jQuery UI offered an easy-to-integrate API that drastically lowered the barrier for implementing rich UI's, with full cross-browser compatibility. It played a crucial role in making dynamic front-end behaviour accessible to developers at all skill levels, becoming a staple in both enterprise and amateur hobbyist applications during the formative years of modern web development.
Although I was a big fan of jQuery and used it extensively across many projects, I never really had the opportunity to use jQuery UI in its entirety. When I did, it was typically for a single component, such as a date picker or drag-and-drop functionality. These components were reliable and well-supported, but required a lot of JS to function, and added complexity that I never felt was acceptable for a single feature. Especially when there were plenty of alternative micro-frameworks available, offering small, focused libraries that solved one problem well. I was far more inclined to take that modular approach than to include an entire suite of UI components unnecessarily.
One resource I found invaluable at the time was MicroJS. It hasn't been updated in quite some time, but it remains a powerful illustration of how easy it is to cherry-pick only the exact functionality you need, without burdening your page with hundreds of kilobytes of JS.
Can I still use it?
For this question, just as I did with jQuery, my answer is yes, you can still use it. It isn't updated very often, but it is still updated! The last release was in October 2024, with version 1.14.1.
To put the enduring popularity of jQuery and jQuery UI into perspective, the 2024 Web Almanac reports that jQuery is still the most-used JS library on the web, appearing on 74% of the pages in the dataset analysed. jQuery UI comes in fourth, with a 22% usage rate. Though described as mostly deprecated, it's a clear reminder of how quickly modern tools become legacy software that must be maintained for decades. The latest release came nearly 18 years after jQuery UI's first version. That's an incredible achievement by the jQuery UI team; talk about dedication!
AngularJS (2010)
AngularJS and Angular are fundamentally different frameworks that share a name and lineage but are otherwise entirely distinct. AngularJS (1.x) was based on a Model-View-Controller (MVC) architecture with two-way data binding, written in JS with support for ECMAScript 5 (ES5) and some ECMAScript 6 (ES6) features. Angular (2+) was built entirely differently: it has a component-based architecture with stronger modularity, supporting two-way binding while promoting unidirectional data flow. Another major difference is that Angular is written in TypeScript, a superset of JS that enables better tooling and type safety, much like Java and many other languages. Angular remains actively used and maintained on the modern web today.
I distinctly remember when Google released AngularJS, because I was in Melbourne, Australia, working at a digital media agency. One of the tech directors there was raving about it and how it was going to change frontend development entirely. In hindsight, I agree with him, but I personally don't believe it was a positive change. Single Page Apps (SPAs) have had a huge negative impact on web performance and accessibility. Plus, a lot of the code in many of these SPA frameworks essentially reinvents functionality that's already built into the browser and the web platform as a whole. Let's not overcomplicate things with framework-managed page state; we have perfectly good back and forward buttons, thank you very much!
As you can probably tell, I'm not a big fan of SPAs. Admittedly, they have their place in some circumstances; however, I genuinely think they're excessive and unnecessarily complicate frontend web development for most applications. But I guess complex is the new simple, right? Anyway, rant over!
Can I still use it?
AngularJS, no. It is no longer actively maintained. Angular, absolutely: it is still a very popular framework on the modern web, although according to the State of JS 2024 survey it has been overtaken by Vue.js in usage for the second year running. Frontend developers can be fickle, often chasing the next shiny framework like a kitten distracted by a dangling set of keys. It will be interesting to see if this decline in Angular usage continues in future JS surveys.
Backbone (2010)
Backbone.js was released in 2010 by Jeremy Ashkenas, who also created Underscore.js and CoffeeScript. As with AngularJS, it was one of the first JS libraries to bring the MVC architecture pattern to client-side JS. The key features of Backbone were its models, collections, views, router, events, and sync functionality, which allowed it to communicate easily with RESTful APIs.
I've only ever worked on a Backbone project once: a rapid prototype website for a major airline based in Asia. Given the tight timelines and the client's high design expectations, we ultimately opted for static HTML, as Backbone's complexity wasn't advantageous for rapid prototyping. In hindsight, had it reached the production stage, I could see the architecture of Backbone being very useful.
Can I still use it?
Backbone is nowhere near as popular as modern frameworks like React, Vue, or Angular; it is still mostly in use on legacy systems. The last version, v1.6.1, was released on April 1st 2025, but looking through the releases, it seems to get only one update per year. According to Wappalyzer it is still in use by around 521,000 websites, the biggest of those being Atlassian. In my opinion you should avoid it and opt for a more popular framework with a more active community; refer to the modern alternatives I listed for the Dojo Toolkit as a starting point.
Knockout (2010)
During the early 2010s, Knockout.js was a popular JS library for building dynamic UIs using the Model-View-ViewModel (MVVM) pattern. It offered features like declarative bindings and two-way data synchronisation, which made it easier to keep the UI in sync with underlying data without manually manipulating the DOM. Its simplicity, ease of learning, and lack of required tooling (just drop a <script> tag into an HTML page and go) made it especially appealing at a time when frontend complexity was just beginning to accelerate. Though now largely superseded by modern frameworks, Knockout played a key role in the evolution of reactive web development.
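To show just how lightweight that was, here's a minimal sketch of Knockout's declarative two-way binding (assuming a local copy of knockout-3.5.1.js):
<p><input data-bind="value: name" /></p>
<p>Hello, <span data-bind="text: name"></span>!</p>
<script src="knockout-3.5.1.js"></script>
<script>
  // A single observable keeps the input and the greeting in sync
  ko.applyBindings({ name: ko.observable("world") });
</script>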
I only used Knockout once for a client in Australia, while I lived there. It was a project that had been passed between offices because it had become a maintenance nightmare. Had it been a better planned build, it would have been a more pleasant experience, but time constraints meant a rapid delivery was prioritised over build quality.
Can I still use it?
The last release was way back in November 2019, so there's no way it should be used for a modern website build. According to Wappalyzer it is still used by almost 43,000 websites, so while you're unlikely to come across it often, it does still linger in some legacy systems.
Modern Alternatives
If you are looking for a modern MVVM framework, then Vue.js, Svelte, or Aurelia are the way to go, as they all fully support the MVVM architectural pattern. If you aren't bothered about MVVM, refer to the list of modern alternatives I gave for the Dojo Toolkit earlier in the post.
6. CSS Workarounds and Browser Quirks
Old CSS practices
Sliding Doors Technique
In the time before border-radius existed in CSS, this was a legitimate way of "faking" the look of rounded corners. The technique's name reflects its implementation and purpose (nothing to do with the 1998 film starring Gwyneth Paltrow about getting cheated on!). It was actually a rather ingenious method of creating rounded corners that expanded and contracted with both the width and height of the box containing the content. It was also incredibly useful for making your site navigation more interesting. If you are interested in reading all about this technique, check out "Sliding Doors of CSS" and "Sliding Doors of CSS, Part II" on alistapart.com, both from October 2003!
If you are old like me, you can just jump back into full nostalgia mode by visiting the final code example here!
Can I still use it?
There's absolutely no need to use it on the modern web. Simply use the border-radius CSS property: it comes with all the positives of the sliding doors technique (rounded corners) and none of the negatives (higher maintenance, additional downloads, and non-semantic wrapper divs used only for presentation). And given its browser support on the modern web, you can go rounded-corner crazy!
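For comparison, the modern equivalent is a single declaration (values arbitrary):
.nav-button {
  /* One property replaces the images, wrapper elements,
     and background trickery the sliding doors technique required */
  border-radius: 12px;
  padding: 0.5em 1.5em;
}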
Image Sprites for Icons
The history of icon sprites is actually quite a fascinating one (for me at least!), since it is essentially rooted in web performance optimisation. The HTTP/1.1 specification (RFC 2616) published in June 1999, specifically says in section 8.1.4 that:
Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy.
HTTP/1.1 introduced persistent connections, a departure from the one-off connections of HTTP/1.0 (as detailed in section 8.1 of the HTTP/1.1 specification). This innovation understandably raised concerns within the Network Working Group regarding potential increases in server load across the web.
The web was becoming more popular every day around that time, with thousands of new websites being launched daily. Remember the dot-com bubble that burst in March 2000? It gives you some indication of just how popular the internet had become across the world!
You're probably thinking, "what has any of this got to do with image sprites?" Well, quite a lot, actually. Under HTTP/1.0, every asset on a page meant opening a new TCP connection, so if your site had hundreds of icons, the browser would literally open hundreds of separate connections just to fetch them all (assuming it were allowed to)! This is why connection limits were introduced into browsers; different browsers chose different limits, but it was usually around 4 to 6 per domain. Even with HTTP/1.1's persistent connections, those limits meant large numbers of small images still queued up and slowed pages down. So, in order to reduce the number of connections and requests needed to download all these image assets, a technique called image sprites was invented.
Image sprites involved combining all your small image assets into one bigger background image file, then using CSS positioning (and width/height) to essentially mask the image and only show the single icon you wanted in any particular place. Only one download was then required to fetch all the small images (one large image containing every icon), and you'd use some CSS magic to reposition the background image. This technique was mostly used for image icons, and the CSS for the method would look something like this:
.icon {
  background-image: url('sprite.png');
  background-position: -20px -40px;
  width: 16px;
  height: 16px;
}
Here we have an icon that is 16px by 16px, and the icon we want to show is positioned at -20px -40px on the background image. Genius! A working example of image sprites can be found here on CodePen.
Can I still use it?
Again, no need. Image sprites fell out of favour as a technique when more modern techniques like SVG sprites and webfont icons came along.
The technique was also made obsolete by newer web performance technologies, such as HTTP/2's ability to multiplex many requests over a single connection. But those are topics for another blog post, which I just so happen to have written a little about, so check them out (shameless self-promotion, sorry!)
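For completeness, here's a minimal sketch of the SVG sprite approach that replaced image sprites (the icon artwork is deliberately simplified):
<!-- The sprite: hidden on the page, defining reusable symbols -->
<svg xmlns="http://www.w3.org/2000/svg" style="display: none;">
  <symbol id="icon-search" viewBox="0 0 24 24">
    <circle cx="10" cy="10" r="7" fill="none" stroke="currentColor" stroke-width="2"/>
    <line x1="15" y1="15" x2="22" y2="22" stroke="currentColor" stroke-width="2"/>
  </symbol>
</svg>
<!-- Reference a symbol wherever the icon is needed -->
<svg width="16" height="16" aria-hidden="true"><use href="#icon-search"></use></svg>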
Vendor Prefixes for CSS
Now, I'm pretty sure some readers will be thinking, "wait, vendor prefixes for CSS are an old best practice??" They really aren't, as they are a vital technique to stop new CSS additions from "breaking the web". Slapping a vendor prefix onto a CSS property allows browser vendors to experiment with new CSS features in isolation; then, once the feature has stabilised in the CSS specifications and is supported by the major browsers, the prefix can be removed in favour of the non-prefixed version. For example, -moz-border-radius became border-radius. So, just to clarify: vendor prefixes aren't an old best practice, but the way they were added to website code is. In the distant past you had to manually write out each of the prefixed versions to ensure maximum compatibility with all browsers. Can you imagine writing this out manually for every new CSS feature you wanted to use! Example:
.mybox {
  -webkit-border-radius: 10px; /* Chrome, Safari, iOS Safari (early versions) */
  -moz-border-radius: 10px; /* Firefox (pre-version 4) */
  -ms-border-radius: 10px; /* Not officially supported, but some old syntax examples include it */
  -o-border-radius: 10px; /* Old Opera (Presto engine, now obsolete) */
  border-radius: 10px; /* Standard syntax */
}
What a waste of bytes and time! (I know compression pretty much fixes the bytes issue!) With the popularity of tools like Compass (a Sass framework), LESS mixins, and Bourbon (a Sass mixin library), or even Stylus with the nib plugin, the issue started to be abstracted away from developers.
Can I still use them?
On the modern web there's simply no need to manually add prefixes. The best practice is to use PostCSS with Autoprefixer. If you're using a bundler like webpack, Vite, or Parcel, they all support Autoprefixer out of the box; Rollup also supports it, but requires a plugin. With these tools, along with Browserslist, frontend developers don't even have to think about prefixes any more (once properly configured). An automation win for all developers!
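For example, a minimal PostCSS setup with Autoprefixer looks something like this (both file names are the tools' conventional defaults):
// postcss.config.js
module.exports = {
  plugins: [
    require("autoprefixer"), // adds/removes prefixes based on your Browserslist
  ],
};
And the browser targets live in a Browserslist config:
# .browserslistrc
defaults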
Heavy Use of !important in CSS
In the early days of CSS the use of !important was a common practice for the enforcement of style overrides. It was often considered a "quick fix" for specificity issues, and used to fix visual inconsistencies in cross-browser scenarios or when using third-party CSS code. However, the overuse of !important is now widely considered a legacy practice that causes major issues with scalability, testability, and maintenance.
Can I still use it?
Yes, it can still be used in certain circumstances, e.g. quick debugging or third-party overrides. But a rule of thumb I'd recommend: if you are using !important for anything outside that small set of circumstances, you likely have bigger problems that need to be addressed first, as having predictable specificity in CSS is critical for the long-term "health" of any web UI. Its use is a clear red flag that a project may be difficult to work on, as it's a distinct frontend code smell.
That being said, if you ever do need to use it, you should clearly document the reason(s) why in the code, so future developers understand. Without that documentation, a future developer is likely to remove this legacy CSS practice due to its reputation, potentially disrupting page styling elsewhere on the website.
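For example, here's a hedged sketch of the kind of documented, narrowly scoped override I mean (the selector and scenario are hypothetical):

/*
 * Override: the third-party booking widget injects an inline style
 * we cannot edit, so !important is the only hook available.
 * TODO: remove if/when the vendor exposes a proper theming API.
 */
.booking-widget__label {
  color: #000 !important;
}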
OldIE hacks
For anyone lucky enough to have missed the joy of building websites for IE 5, 5.5, and 6 (the browser trilogy nobody asked for, released between 1999 and 2001), rest assured, that statement is absolutely, definitely not sarcastic at all. Promise! These versions of IE were just terrible browsers to work with. Microsoft, in their infinite wisdom, followed their own interpretation of browser standards that were still in their infancy, often creating proprietary features that no other browser vendor followed or implemented.
Bundled with Windows XP, the popular IE6 unfortunately held back web progress for several years. Its standalone release for Windows 98, Windows ME, Windows NT 4.0, and Windows 2000 further cemented its widespread adoption and prolonged influence across the web. It was only when a viable and revolutionary competitor finally emerged that Microsoft was forced to innovate in the browser market or face obsolescence, finally moving the web forwards! Initially launched in 2002 as Phoenix 0.1, a name symbolising its rise from the ashes of Netscape, this browser was renamed Firebird in early 2003, and later that same year became Firefox, the name it retains today.
And as an avid Firefox user for many years, I'm delighted it's still around, as it has its own rendering engine, Gecko, in a browser market that is otherwise very WebKit / Blink dominated. Competition in this area is always a good thing to help drive innovation and move the web forwards!
So, enough of the dusty old history books, what was so bad about IE6? Oh dear, where do I even begin? It was less "web browser" and more "digital dumpster fire", with a whole host of issues and annoyances listed below. As for the "Can I still use it?" section, trust me, you wouldn't want to! oldIE is thankfully about as common on the modern web as a third-party script that improves web performance! Let us have a look at many oldIE issues in all their glory:
DOCTYPE fragility
If there was a single character before the DOCTYPE, IE6 would trigger quirks mode (see the Quirks Mode Layouts section earlier in the post for a more detailed explanation). This included a newline, a comment, a space, or an invisible character such as a byte-order mark. As you can imagine, this was a nightmare to debug, and if you weren't in full control of the HTML coming from the server, you were in for a bad time! Quirks mode essentially set the browser into its legacy rendering mode (mostly IE5 behaviour). If that wasn't bad enough, it would also interpret the box model differently, causing chaos for layouts! Lastly, CSS and layout behaviour would be inconsistent or, in some cases, totally broken!
zoom: 1 hack
The zoom: 1 hack forced an element into the "hasLayout" rendering mode, which was a proprietary internal rendering concept in IE6 and IE7. For any element that didn't “have layout”, it could:
- Collapse when floated
- Break rendering of child elements
- Overflow improperly
- Fail to clear floats
- Interrupt margin collapse
I distinctly remember using zoom: 1 a lot with float layouts, which were the main option for CSS layouts at the time. As mentioned above, the element would then contain its floated children and handle overflow correctly.
Underscore Hack
This was a pretty simple hack to target only IE6 (and IE7 in quirks mode): any property prefixed with an underscore is ignored by modern browsers, but recognised by those versions of IE.
.selector {
_width: 500px; /* targets IE6, and IE7 in quirks mode */
width: 500px; /* modern browsers */
}

Asterisk Hack
Another simple hack to target IE6 and IE7 that is ignored by modern browsers. It was especially useful for fixing box model issues.
.selector {
*width: 100px; /* targets IE6 & IE7 */
width: 100px; /* modern browsers */
}

Star HTML Hack
A slightly different hack that was added to the start of the selector rather than a property within a selector. It would only target IE6 in standards mode (not quirks mode).
html .selector { /* all browsers including IE6 */
margin-left: 50px;
}
* html .selector { /* IE6 only - order is important */
margin-left: 0;
}

Note: The order of the above code is important, since IE6 could read both selectors, but the second one "wins" due to its position in the cascade.
Child Selector hack
Another straightforward hack to target ONLY modern browsers (not IE6) was:
ul > li { /* not understood by IE6 & IE7 */
color: rebeccapurple; /* "rebeccapurple" not understood by IE6 & IE7 */
}

IE6 & IE7 didn't understand the child selector, so they simply ignored the whole rule.
Note: the colour is also historically significant in the web community, as the rebeccapurple CSS color keyword was added as a tribute to web pioneer Eric Meyer's daughter Rebecca, who passed away from brain cancer at the age of six. Eric, if I'm ever lucky enough for you to see this post, please know that as a brain cancer survivor myself who has personally seen the impact this terrible disease has on friends and family, my heart truly goes out to you and your family. I am so sorry for what you've all been through. Rest in peace, Rebecca.
Double Margin Float Bug
This is a pretty aptly named CSS bug, as it does exactly what it says on the tin! Due to IE6's eccentric box model interpretation, it sometimes doubled the margin on the floated side of a floated element.
.floated-element {
float:left;
display: inline; /* fix the double margin bug in IE6 */
}

Peekaboo bug fix
IE6 sometimes threw its toys out of the pram when content changed on the page; in these cases it would either render elements incorrectly or just make them disappear, hence the name peekaboo!
To fix it, you had to apply "hasLayout" to the element:
.buggy-element {
zoom: 1; /* trigger hasLayout on the element to fix the peekaboo bug */
}

Transparent PNG fix
IE6 was unable to display the alpha channel (transparency) in PNG files; instead, it rendered it as a solid grey background. Combined with rounded corners on a non-solid background colour, this was a real pain in the backside! Thankfully, IE6 supported CSS filters, allowing Microsoft to offer its proprietary AlphaImageLoader for PNG transparency. There were two methods to apply this filter:
.transparent-png {
behavior: url("iepngfix.htc");
}

The .htc file contained JS logic that dynamically applied Microsoft's proprietary filter to the PNG elements.
An alternative for background images was this:
.transparent-bg {
background: none !important;
filter: progid:DXImageTransform.Microsoft.AlphaImageLoader(
src='image.png',
sizingMethod='crop'
);
}

Yep, how horrible is that! 🤮
But even if you solved the transparency issue, your problems weren't over, as its usage came with a few caveats:

- AlphaImageLoader broke CSS background-position and background-repeat.
- The fix didn't work on <img> tags unless additional .htc behaviour hacks were applied!
- .htc files could be blocked fairly easily by companies with restrictive internet policies, so it wasn't always guaranteed to work. And if this happened to be one of your clients... well, there goes the whole design!
- Lastly, these hacks only worked in IE5.5 and IE6; thankfully IE7 supported transparent PNGs, so the hacks had to be targeted at IE5.5 and IE6 only.
Lack of IE Developer Tools
I know the bugs above sound frustrating, but one of the biggest issues was that IE6 and IE7 completely lacked any "sane" developer tools to help resolve them. Initially, developers had to rely on alert() or document.write() for debugging! This is where the infamous [object Object] (or just [object]) output came from when debugging JS in IE6 and IE7: if you had a JS error you wanted to investigate, that was just about all the information alert() gave you. Thankfully, Firebug Lite was later released by the Firebug Working Group. It wasn't developed by Mozilla, and it wasn't a browser extension; it was a JS file that you could include in your page (or run as a bookmarklet) to mimic the Firebug debugging tools. It was only later, in IE8, that Microsoft shipped the first native developer tools in IE. You enabled them by pressing F12, and they finally gave developers the console functionality we still use today. Interestingly, F12 remains the standard shortcut for opening DevTools across most modern browsers!
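If you've never had the pleasure, here's a small sketch of what that "debugging" looked like; alert() coerces objects to a string, hence the infamous output:

// the state of the art in IE6 debugging
var user = { name: 'Marty', year: 1985 };
alert(user); // displays "[object Object]", and that's all the detail you got!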
IE Conditional Comments
Conditional comments were introduced in IE5, released in March 1999. Microsoft added them because IE had its own interpretation of HTML, CSS, and JS that diverged from W3C standards, which made it extremely difficult for frontend developers to build consistent UIs that worked across all browsers without resorting to complex and brittle CSS hacks, as seen earlier in the post. Conditional comments allowed developers to have the best of both worlds: a fully W3C-compliant CSS file for modern browsers, and a CSS file that only applied to certain versions of Internet Explorer (containing all the version-specific hacks!).
It's worth noting that each version of IE had its own specific hacks, so you'd often see multiple conditional comments in the <head> of a website, placed after the W3C-compliant stylesheet (so that, thanks to the cascade, the IE-specific rules could override the standard ones). The hacks were still there, but they were sectioned off into their own CSS file(s), ready to be removed once the browser was no longer used by the vast majority of site users. There were countless ways to write conditional comments, thanks to the version logic built into the interpreter. Note that other browsers ignored them entirely, as they were parsed as standard HTML comments (<!-- and -->).
Some example conditional comments can be found below:
<!-- Target only Internet Explorer 6 -->
<!--[if IE 6]>
<link rel="stylesheet" href="styles-ie6.css">
<![endif]-->
<!-- Target IE 7 and lower -->
<!--[if lte IE 7]>
<link rel="stylesheet" href="styles-ie7-and-below.css">
<![endif]-->
<!-- Target IE 8 only -->
<!--[if IE 8]>
<link rel="stylesheet" href="styles-ie8.css">
<![endif]-->
<!-- Target IE 9 and above -->
<!--[if gte IE 9]>
<link rel="stylesheet" href="styles-ie9-and-up.css">
<![endif]-->
<!-- Target any version of IE -->
<!--[if IE]>
<script src="polyfills-for-ie.js"></script>
<![endif]-->
<!-- Exclude all versions of IE (i.e. target modern browsers only) -->
<!--[if !IE]> -->
<script src="modern-browser-script.js"></script>
<!-- <![endif]-->
<!-- Combine conditions: Target IE 6 to 8 -->
<!--[if (gte IE 6)&(lte IE 8)]>
<link rel="stylesheet" href="legacy-ie6-to-ie8.css">
<![endif]-->

Explanation of the Condition Syntax:

- IE matches any version of Internet Explorer.
- IE 6, IE 7, IE 8, etc. match specific versions.
- lte = less than or equal to
- gte = greater than or equal to
- !IE = not Internet Explorer
- Note the use of <!--[if !IE]> --> and <!-- <![endif]--> to properly close the comment in non-IE browsers. This was known as a downlevel-revealed comment.
As the examples above show, it was fairly simple to target very specific (and even ranges of) Internet Explorer versions. A significant drawback was the increased maintenance burden and the clutter they introduced into the site's <head>.
Thankfully, Microsoft removed the parsing of conditional comments in IE10 and IE11 before they eventually introduced a whole new browser called Microsoft Edge. Edge initially used a proprietary rendering engine called EdgeHTML. However, the browser was subsequently rewritten to incorporate the same open-source engine as Google Chrome. This new version, based on Chromium 79, was released as Microsoft Edge 79 on January 15, 2020.
IE CSS Selector Limit
This issue, of all those detailed in this section, is arguably the most random, as well as one of the least noticeable! IE6 to IE9 had a limit of 4,095 selectors per stylesheet. Now that may sound like a lot, but it was very straightforward to exceed, especially when grouping selectors. For example:
/* This counts as a single selector */
.my-selector {
margin: 20px;
}

That's all straightforward, but then you look at something like this:
/* This block counts as three selectors */
.button-primary, .button-secondary, .button-tertiary {
margin: 20px;
}

Once you started grouping selectors for easier maintenance, it became far too easy to hit the limit, especially on large websites. If you were a user of Bootstrap or Foundation at the time, you could hit this limit unintentionally, without even knowing it!
And that brings me onto my next point: What happened when that limit was reached? Well... nothing really, IE just didn't parse any CSS beyond the 4,095 selector limit. Would it warn you that this was happening? Absolutely not!
Developers were extremely fortunate if this issue was discovered by testing pages styled later in the stylesheet; Internet Explorer itself would simply fail without any error messages or warnings.
And, to make it even more confusing: it would only impact the specific stylesheet that had exceeded the limit, not the page as a whole. For example:
<link rel="stylesheet" href="base.css"> <!-- 2000 selectors -->
<link rel="stylesheet" href="theme.css"> <!-- 4500 selectors -->
<link rel="stylesheet" href="overrides.css"> <!-- 300 selectors -->In IE6 - IE9 this is what would happen:
- base.css loads perfectly fine; it is under the limit.
- theme.css: only the first 4,095 selectors are parsed; the rest are silently ignored.
- overrides.css would load fully, since it is under the limit.
This behaviour creates a partial styling issue. Elements relying on theme.css won't be styled correctly beyond the 4,095-selector limit. Most pages will appear normal until one attempts to use selectors 4,096 through 4,500 from the theme file, at which point styling fails without warning. And of course, if you were unlucky enough to be working with IE6 or IE7, you had no developer tools to debug the issue either!
Solution
So what was the solution? Well, with the invention of preprocessors like Sass and tooling like Grunt, Gulp, or PostCSS, the splitting of stylesheets at the 4,095-selector limit could be automated.
Or another solution was to supply a simplified UI to IE browsers only and serve those CSS files via IE's Conditional Comments. But can you imagine the maintenance involved in updating multiple different stylesheets? Just for the slightest UI change!
The final approach involved reducing reliance on external stylesheets by inlining critical CSS directly into the <head> of the page, specifically for above-the-fold content (we’ll come back to that strategy later, as it’s not as relevant today). Even thinking about these different maintenance options, and their implications gives me a headache!
OldIE hacks Summary
As you can imagine, CSS files around this time were quite a mess, full of random cross-browser hacks and workarounds! Thankfully, Microsoft recognised this was an issue and implemented conditional comments in IE5 - IE9 to make the madness a little easier to manage (in terms of organisation, not coding).
7. Markup of the Past
XHTML 1.1 and 2.0
I remember having a conversation with a friend about how he was converting his website to a new standard that had just come out. This was around 2001, and the new standard was XHTML 1.1. The most obvious difference at the time was the DOCTYPE at the top of the page source. From HTML 4.01 Strict:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">to:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">What did XHTML 1.1 aim to achieve?
The goal of this new standard was to modularise XHTML and enforce stricter XML compliance. It was based on XHTML 1.0 Strict, but split into modules for better reusability and extensibility. It also required documents to be well-formed XML, and it enforced stricter syntax than HTML: all tags had to be closed, and all attributes quoted.
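In practice, that strictness looked something like this (a small sketch; void elements had to be self-closed and boolean attributes written out in full):

<!-- HTML 4.01 tolerated this: -->
<img src="photo.jpg" alt="A photo">
<input type="text" name="q" disabled>

<!-- XHTML 1.1 required well-formed XML: -->
<img src="photo.jpg" alt="A photo" />
<input type="text" name="q" disabled="disabled" />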
Unfortunately, XHTML 1.1 came with a number of limitations that doomed the specification from the start. There was very little browser support for serving XHTML 1.1 as application/xhtml+xml, and, more critically, it broke backwards compatibility in many real-world use cases. Lastly, many developers continued to write XHTML but serve it as text/html, which entirely defeated the point of writing XHTML in the first place!
Because it never gained wide browser support and required a very strict syntax, it eventually became obsolete and is now mostly of interest for historical or academic reasons.
What did XHTML 2.0 aim to achieve?
XHTML 2.0 was never officially released or used in any production browsers. Had it been, this would have been the DOCTYPE:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 2.0//EN"
"http://www.w3.org/MarkUp/DTD/xhtml2.dtd">Development began on the specification in the early 2000s with the key goals its key goals being:
- a clean break from HTML 4 and XHTML 1.x.
- planned to introduce entirely new ideas, like a generic <h> heading element and allowing any element to act as a link, reducing the special role of <a>.
- high up on its priority list was for it to be device-agnostic and semantically pure.
The XHTML 2.0 specification ultimately failed for the following reasons:
- It lacked practical browser support and implementation.
- HTML5 (developed by WHATWG) gained real traction by improving the existing HTML and maintaining compatibility.
- And the gigantic final nail in the XHTML 2.0 specification was its complete lack of backwards compatibility. It broke the entire ecosystem of existing web pages and tools.
- The W3C officially halted XHTML 2.0 in July 2009 and shifted efforts to HTML5.
This then brings us onto the modern standard still in use today, and finally a DOCTYPE that was easy to remember! Look how developer-friendly it is:
<!DOCTYPE html>

It is so simple, in fact, that it is case-insensitive and doesn't require a cumbersome URI to a Document Type Definition (DTD), which was there to define the structure, rules, and legal elements and attributes allowed on a page.
Inline JavaScript
There are two distinct methods of using inline JS in an HTML document, and both are specified in the HTML Standard (WHATWG).
Inline Script Block
The inline script block is defined in section 4.12.1 of the HTML Standard. The syntax is simple and familiar. An example usage is as follows:
<!DOCTYPE html>
<html lang="en" class="no-js">
<head>
<meta charset="utf-8">
<title>Title here</title>
<script>
document.documentElement.className = document.documentElement.className.replace('no-js', 'js');
</script>
</head>
<body>
<!-- The script can also be placed in the body too -->
</body>
</html>

It's perfectly fine to use the inline script block in this way, and it is in no way a legacy / outdated technique. But it does come with a few things worth considering before using it. Inline script blocks like this cannot use the async or defer attributes; those only apply when loading external scripts via the src attribute.
Can I still use it?
Yes, you can, but it comes with a few caveats. A script in this position in the <head> executes immediately and synchronously, and it will block page rendering until the JS code completes. So make sure you don't overload an inline script block as your website will pay the price in terms of frontend web performance.
Inline Event Handler or HTML Event Attribute
Inline event handlers are defined in section 3.2.6 and section 8.1 of the HTML Standard (WHATWG). They are now considered a legacy pattern in modern web development, for several reasons: the security risk posed by JS executing in the global scope, the potential for Cross-Site Scripting (XSS) vulnerabilities, and the way they clutter the code. An example of an inline event handler is below:
<!-- other page code here -->
<ul>
<li><a onclick="alert('About clicked'); return false;" href="/about">About Us</a></li>
</ul>
<!-- other page code here -->

The simple example above captures the onclick event from the anchor, and instead of taking you to the About page as you would expect, it simply brings up an alert box with the "About clicked" string.
There are other reasons why this technique is considered legacy: it conflicts with the principle of separation of concerns, and it makes debugging, testing, and the implementation of accessibility best practices harder. Lastly, a Content Security Policy (CSP) can disallow inline event handlers unless they are explicitly allowed (another big security risk!).
Can I still use it?
No! Don't use this outdated technique to add JS interactivity to your page. Instead you should move towards external scripts and unobtrusive JavaScript. An example of which is given below:
<!-- other page code here -->
<ul>
<li><a class="myclass" href="/about">About Us</a></li>
</ul>
<!-- other page code here -->

// assuming the element .myclass is already in the DOM
const el = document.querySelector(".myclass");
el.addEventListener('click', () => {
// Do JS stuff here!
});

This assumes that the .myclass element is already in the DOM. If it isn't, document.querySelector will return null, and calling addEventListener on null will throw a TypeError. The safest way around this is to use the DOMContentLoaded event. An example is given below:
document.addEventListener("DOMContentLoaded", () => {
const el = document.querySelector(".myclass");
if (el) {
el.addEventListener("click", () => {
// Do JS stuff here!
});
}
});

If you're thinking "that's a lot of code just to add a click event to a single element!", you'd be correct, hence why libraries like jQuery were so incredibly popular for event handling and basic DOM manipulation.
Document.write()
I must admit, I don't think I've ever actually used document.write() on a webpage; that's probably because I've never seen a sane reason to use it! It's a JS method provided by the browser's DOM that allows you to write HTML or text directly to the page. A simple example is given below:
<script>
document.write('<h1>Hello, world!</h1>');
</script>

Wherever this code is placed in the page, it will simply output an h1 with the content Hello, world! Now, there's a reason I was so harsh on the method above, and that's because it comes with some horrible side effects. These include the following:
- As with any Inline Script Block, it blocks page rendering until the content is written to the page.
- It runs synchronously and can block scripts and other resources from loading efficiently.
- Lastly, and this has to be the best (and most horrifying) feature: if used after the page has fully loaded, it can erase the entire DOM and replace it with whatever was passed to the method. That would be <h1>Hello, world!</h1> in the example given above.
- It's important to consider the security implications of its usage, as it is similar to eval() (MDN link) in some ways (but not all). Both methods can enable cross-site scripting (XSS) if user input is injected without sanitisation.
When should it be used?
On the modern web, the simple answer is never! It's best seen as a historical curiosity still found in legacy systems that haven't been modernised yet, or in elementary educational examples and demos. There are a number of much safer (and more robust) modern alternatives that should be used instead. These include:
- element.innerHTML (MDN link).
- element.textContent (MDN link).
- document.createElement() (MDN link), in conjunction with appendChild and insertBefore.
- Modern frameworks or libraries for manipulating the DOM and updating the UI.
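As a quick illustration of the safer route, here's a minimal sketch using document.createElement() to achieve the same result as the document.write() example above:

// builds the same <h1> without blocking rendering or risking a DOM wipe
const heading = document.createElement('h1');
heading.textContent = 'Hello, world!';
document.body.appendChild(heading);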
Fixed Viewport Meta Tags
Fixed viewport meta tags were used in early mobile responsive development. An example of what it looks like is below:
<head>
<title>My Fixed width Mobile Site</title>
<meta name="viewport" content="width=320, user-scalable=no">
</head>

In the example above, the viewport meta tag is telling the browser that:
- This website should be rendered at a fixed width of 320px
- It should disable user scaling, so the layout is locked to a specific dimension, regardless of the actual screen size.
Why use this approach?
In the early days of mobile development, desktop websites were very difficult to view and interact with on small mobile screens. To "fix" this, developers often built mobile-specific websites that sat alongside the desktop website. 320px was a popular width at the time because the original iPhone had a 320px-wide screen (in CSS pixels). To maintain maximum control over the layout on these devices, developers frequently prevented users from zooming. These restrictions also helped avoid layout shifts caused by dynamic viewport changes, e.g. device orientation (portrait vs landscape), pinch-zoom gestures, or changes in browser UI elements (address bar or toolbars).
Why was this bad?
There were a number of reasons why this technique was bad. Number one: it was terrible for accessibility. A user with a visual impairment, on a mobile site that disabled pinch-to-zoom (user-scalable=no), had no way to read the site's content. Secondly, by dictating a screen width you harm adaptability and make assumptions about a user's device. Devices come in all shapes, sizes, and pixel densities: mobile, tablet, desktop, and every resolution in between. There are effectively infinite possible screen dimensions, and it's impossible to maintain fixed versions for all of them, so the technique quickly became outdated. Lastly, fixed sizes can lead to performance issues, as they may cause unnecessary UI reflows and repaints when used with older layout methods such as tables or fixed-position elements.
Modern Best Practice
With the development of responsive design practices by Ethan Marcotte in 2010 (Responsive Web Design: A List Apart), fixed layouts quickly fell out of fashion. With responsive design, you could develop a single UI that worked on all devices, no matter their screen size or pixel density. One UI to rule them all! It comes with the huge advantage of much less maintenance for developers and, more importantly, a huge usability win for users, no matter what device they view a website on. The recommended viewport meta tag for modern websites is this:

<meta name="viewport" content="width=device-width, initial-scale=1">

This tells a user's browser:
- to match the screen's actual width (width=device-width).
- to set the base zoom level (initial-scale=1).
- to allow users to zoom the viewport (user-scalable=yes, the default if not set).
You may come across fixed layouts in legacy applications, and if you do, you should seriously consider:
- Replacing fixed tags with scalable ones.
- Refactoring CSS layout logic to use flexible grids, fluid typography, and media queries.
- Ensuring accessibility standards are upheld, especially zooming support (up to 200% as specified in WCAG Success Criterion 1.4.4).
Web Safe Fonts Only (before @font-face)
Fonts are arguably the most crucial component of the web. Without them, there would be no content, and consequently, no internet. This fundamental importance explains the extensive nature of the CSS Fonts Module Level 4 documentation. Fonts present a challenge due to their vast variety and subjective nature. What one person finds legible, another may not.
Web-safe fonts are typefaces that are broadly supported and consistently rendered across most web browsers and operating systems, eliminating the need for users to install additional fonts. They are distinguished by three primary characteristics:
- They come pre-installed on most devices across all operating systems (Windows, macOS, Linux, iOS, and Android).
- They render consistently across browsers, devices, and operating systems.
- If a font isn't available on a certain device, there's a viable alternative that can be used by default; this is called "fallback safety".
A common fallback font family using the font-family CSS property is displayed below:
.class {
font-family: Arial, "Helvetica Neue", Helvetica, sans-serif;
}

According to CSS Font Stack, this combination of fonts is supported by 99.84% of devices on Windows and 98.74% on Mac. Notice how it gives the browser a list of fonts: the primary choice is Arial; if Arial isn't available, "Helvetica Neue" is used, and so on, all the way down to sans-serif, which basically says "if none of the preceding fonts are available, choose any sans-serif font on the device". This guarantees that a font will always be available, so even though different fonts may be used depending on the operating system, the page content will still render and remain readable for all users.
Common Web Safe Fonts
- Arial
- Times New Roman
- Verdana
- Georgia
- Courier New
- Trebuchet MS
- Lucida Console
The issue with these fonts is that they are very limiting, especially for the design community. Designers have very strong opinions on fonts; it is their "bread and butter", after all, so that's to be expected! For years, both developers and designers strived to bring all fonts, not just web-safe ones, to the web, and people came up with several different ways of using non-web-safe fonts, including the methods I mentioned earlier in the post. All of those methods worked, but they had limitations, be that with accessibility, performance, maintenance, security, or SEO.
In order to mitigate these limitations, a modern, standardised method was required for browsers to load custom fonts.
Enter @font-face
@font-face is a CSS rule that allows web developers to load custom fonts on a webpage. Unlike the methods listed above it's a native browser feature that brings typographic control to the web, while also preserving Accessibility, SEO, Maintenance, Security, and Performance (if implemented correctly).
The @font-face rule has a notable history: it was initially implemented by Microsoft in Internet Explorer 4 in 1997, using Embedded OpenType (EOT) fonts. This was a proprietary Microsoft solution, not part of the CSS standard at the time, so adoption outside of Microsoft browsers was non-existent. It wasn't until the W3C developed and standardised the CSS Fonts Module Level 3 that browser support across different vendors started to improve.
Although the first CSS Fonts Module Level 3 working draft was published in July 2001, true standardisation took time, as browser vendors gradually adopted open formats like TTF, OTF, and later WOFF and WOFF2. The module was not released as a W3C Recommendation until September 2018.
Usage
So how do we actually use @font-face? Well, it's pretty straightforward:
Declaring the font
@font-face {
font-family: 'MyFont';
src: url('/fonts/myfont.woff2') format('woff2'),
url('/fonts/myfont.woff') format('woff');
/* other font formats here */
font-weight: normal;
font-style: normal;
}

Although you can define other font formats, this is no longer recommended, since the combination of WOFF and WOFF2 covers all popular browsers. In fact, depending on your user analytics data, you may even be able to drop down to listing only WOFF2, since it is now supported by 96.2% of browsers in use according to Can I Use.
Using the Font
body {
font-family: 'MyFont', sans-serif;
}

Here is where we apply the custom font to the page elements using standard CSS selectors.

IMPORTANT: note how sans-serif has also been set as a fallback, i.e. later in the font list. This is best practice because we are loading an external font file to render the text on the page; if that font no longer exists on the server (or simply fails to load), the fallback ensures users still get an appropriate web-safe font rather than no text at all.
Now, there are a number of web performance points to consider when using web fonts, but I won't go into them here. Instead, I'll point you towards Zach Leatherman's excellent "The Five Whys of Web Font Loading Performance" article from November 2018, which also links to his Performance.now() conference talk on the same subject. Well worth a watch if you have a spare 46 minutes!
8. Tools and Workflow Relics
SVN (subversion, largely replaced by Git)
SVN (Subversion) is a centralised version control system that was widely used in the 2000s and early 2010s. It was the first versioning system I used at one of the digital agencies I worked at in the late 2000s. The memories that stick with me most about SVN are:
- Every folder and sub-folder had an annoying .svn directory within it. This directory contained all the metadata needed by SVN to manage the versioned files.
- Branching and merging in SVN was a painful experience!

In fairness, I haven't used it in over a decade, so these points may have changed; in fact, since SVN 1.7 the metadata has lived in a single .svn directory at the root of the working copy, rather than in every folder.
The key word in the top paragraph above is "centralised". In a version control context that means that with SVN there's a single central repository that all version history and file management operations are built around.
In comparison, Git is decentralised. When you clone a repository onto your local machine, you have the whole history of all the files; you can modify them while offline, then synchronise with other developers' modifications once you're back online.
Legacy Development Practices
If SVN is still being used in 2025, it can imply certain things about the codebase and the team's working practices. These include:
- The tooling being used is likely to be old (e.g. Eclipse plugins, shell scripts).
- Continuous Integration (CI) / Continuous Delivery (CD) is likely to be very basic or missing entirely.
- Due to the complexity of the branching and merging process in SVN, this type of workflow will likely be minimal if used at all!
Team Cultural Indicators
There are also red flags in terms of engineering culture if SVN is still being used. It typically indicates that:
- The engineering team has a conservative engineering culture.
- The team have a risk-averse attitude to change.
- The team may have a backlog of technical debt that has accumulated over many years.
- Recruiting developers who want to use SVN is likely to be challenging; recent surveys indicate that SVN has a 5.18% share of the Version Control System (VCS) market, a distant second to Git, which dominates with 93.87%. This is also likely to impact retention, since Git / GitHub are the dominant tools in most industries (although not all) in 2025.
What to look out for
Should you happen to encounter a project that still uses SVN for version control, you should:
- Expect resistance to adopt modern workflows (e.g., GitFlow, CI/CD).
- Investigate whether the tooling supports migration to Git (e.g. git svn), or if a full rewrite might be needed.
- Evaluate whether SVN is tightly embedded in the build and deployment process.
- Prepare yourself for recruitment and retention issues as mentioned above.
Migration
Assuming you've stumbled across a legacy project that uses SVN, and modernisation and migration are goals for the project, it's worth knowing that:
- SVN to Git migration tools exist (git svn, or tools like SubGit), but edge cases can be painful; see the sketch after this list.
- You will likely need to retrain teams and completely refactor deployment automation.
- Start with a small component or project to see if modularisation is a feasible option.
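For reference, here's a minimal sketch of a one-shot import with git svn; the repository URL is hypothetical, and real migrations usually also need an author-mapping file and some history clean-up:

# clone an SVN repo that uses the conventional trunk/branches/tags layout
git svn clone https://svn.example.com/project --stdlayout project-git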
Can I still use it?
TL;DR: Git / GitHub is the way to go, for modern web development best practice.
Chrome Frame
Chrome Frame was an ambitious project to bridge the gap between modern web standards and the many limitations of old IE versions (IE6 - IE8). It was released by Google as a plugin for IE in late 2009. It essentially embedded the Chrome browser engine into older IE versions, thus allowing the IE / Chrome hybrid to use modern web standards, modern JS frameworks, HTML5 features, all while still being compatible with an IE-centric environment. This proved particularly beneficial for larger enterprise organisations, which were reliant on older versions of IE due to legacy infrastructure and were unable to adopt more modern browsers.
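Sites that wanted to opt in advertised it with an X-UA-Compatible hint in the page's <head>, which told IE to hand rendering over to Chrome Frame when the plugin was present:

<meta http-equiv="X-UA-Compatible" content="chrome=1">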
While it sounded great in theory, it unfortunately came with a whole host of downsides, especially around adoption. Installing the plugin required admin access to machines; many companies locked down the use of plugins due to the security risk of installing third-party code; and it introduced complexity for IT support and Quality Assurance (QA) teams due to the hybrid nature of the rendering engine.
Can I still use it?
No, it was deprecated by Google in 2013, and support ended in 2014. The reasons were that web standards had improved, and IE itself had improved with the release of IE9 and later. A significant change in browser releases was the move to "evergreen browsers", which update automatically in the background without user intervention. These releases were untethered from specific operating system versions (Safari being the notable exception).
Although Chrome Frame only saw limited success, it certainly helped initiate discussions on migrating from legacy browsers in large enterprise environments.
I distinctly remember when it was announced I thought it would solve all our IE problems (finally!). It was only when it was released that I realised there was no way it would be able to solve the issue because:
- It was complex to install (e.g. required admin access)
- The majority of users at the time using older versions of IE were likely neither technically capable nor even interested in installing it as a plugin.
- Note: this isn't meant to sound elitist, but at the time, most people would likely have identified the internet as the "blue 'e' icon" on their desktop. Outside the web development community, few knew (or cared) what a web browser was, let alone which one they used! And I'd say that's most likely still true on the modern web!
CSS Resets
Before we get into the details, what is a CSS reset? It's essentially a set of CSS selectors and properties used to "reset" all styles across browsers to a common baseline. Think of it as a solid foundation on which to build your website. In theory, if all browsers render elements identically from the start, the site will be easier to build and maintain, because all those nasty minor cross-browser CSS differences will have been dealt with. That's the theory, anyway.
Just to be clear, CSS resets are still around, but they have evolved into something more forward-thinking, minimal, and focussed only on common pain points. The first CSS reset, released in January 2007, was Eric Meyer's classic CSS Reset. It quickly became one of the most widely adopted resets for standardising styling across the major browsers of the time (Internet Explorer, Firefox, and Safari), which it did by resetting all margins, padding, borders, and font styles to a common baseline. It could either be included within your own CSS file, or added as a separate CSS file loaded before your own CSS in the <head>. The order is crucial because you're establishing a standardised foundation: subsequent CSS can then override the resets, whether through direct duplication (leveraging the cascade) or by increasing specificity. For example:
/* Basic reset of the body styling. Specificity score: 0,0,1 */
body {
line-height: 1.5;
font-family: system-ui, sans-serif;
background: #fff;
color: #000;
}
/* Here this selector overrides the one above because it comes later in the cascade. Specificity score: 0,0,1 */
body {
font-family: "Comic Sans MS", Impact, sans-serif;
}
/* Here we use CSS specificity to override the page background colour. Specificity score: 0,1,1 */
body.colored-background {
background: #ff0000;
}

This is why ordering your CSS correctly is best done right at the start of a project. If you bring in a reset file at the end, it will either do nothing at all (due to higher-specificity selectors before it), or it will completely undo lots of your styling simply because it "wins" by coming last in the cascade (i.e. it's the last CSS file loaded).
As mentioned above, CSS resets have evolved over the years. Normalize.css is a very common one in use on the modern web, as it works differently by preserving useful default browser styles and only fixing CSS styles that need to be fixed to maximise CSS consistency across modern browsers.
Other notable mentions are more modern, minimal resets that only focus on certain pain points in cross-browser rendering like box-sizing, responsive images, and font inheritance. These include Andy Bell's: A (more) Modern CSS Reset and Josh Comeau's: A Modern CSS Reset.
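To give a flavour, here's a representative rule from this family of modern resets (a sketch of the common idea, not a verbatim quote of either):

/* predictable sizing everywhere: padding and borders no longer grow the box */
*,
*::before,
*::after {
  box-sizing: border-box;
}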
Kudos to the authors of CSS Resets! Their dedication makes CSS authoring significantly smoother for millions of developers across the world.
Hover-Only Interactions
Hover-only interactions are a legacy practice that suited desktop-only contexts but fails in today's multi-device environment. An example of a hover-only interaction:
.button:hover {
background-color: #ff0000;
}

Hover-only interactions come with the following issues:
- Not accessible on touch devices: Touchscreens do not have a hover state. This means hover-only functionality becomes inaccessible on phones and tablets, leading to broken user experiences.
- Lack of fallback interaction: Many legacy implementations didn't provide alternative means (like a click or focus) to trigger the same behaviour, effectively hiding essential UI or functionality.
- Keyboard accessibility problems: Hover interactions are not always accessible via keyboard unless explicitly paired with :focus or JS handling.
- Poor progressive enhancement: Relying solely on hover effects often ignored the principle of progressive enhancement, especially when essential content was hidden using CSS unless hovered.
- Inconsistent browser behaviour: Legacy browsers had quirks in how they handled hover states, particularly with complex layouts or when mixing JS and CSS interactions.
Modern Best Practice
UIs need to be device-agnostic and align with inclusive design principles. Hover-based interactions should be a supplementary interaction, not the primary one. In order to align with modern best practice you should:
- Avoid hover-only interactions for essential functionality.
- Use :focus alongside :hover, and consider adding :focus-visible to better support keyboard navigation.
- Support click or tap events explicitly for mobile compatibility.
- Provide visible indicators or alternative access methods (e.g. always-visible menus on small screens)
An example in CSS is:
.button:hover, /* mouse users */
.button:focus, /* element focused via click or tab */
.button:focus-visible { /* user likely on keyboard, or other assistive technology */
background-color: #ff0000;
}

The above can be simplified to avoid redundant styling, as combining :focus and :focus-visible can sometimes cause overlapping or unnecessary duplication of visual effects. The recommended approach is the following, as it keeps your styling clean and scoped, applying just what's needed based on the user's input method:
.button:hover,
.button:focus-visible {
background-color: #ff0000;
}

Avoiding redundant styling means you reduce:
- Overlapping in CSS rules.
- Maintenance complexity.
- Risk of inconsistent behaviour between browsers.
- Slightly less to download.
9. Legacy Web Strategies
Blackhat SEO
Blackhat SEO refers to a collection of techniques that people tried to use to manipulate search engine rankings, mostly in ways that violate search engine guidelines, especially for guidelines laid out by Google.
Intent
So why would people want to use Blackhat SEO? Well, its sole focus was prioritising rapid results over sustainable growth. Consultants selling these techniques were focussing on exploiting weaknesses in search engine algorithms, rather than creating genuine value for users. Being at the top of the Google results page was the primary goal, and companies were willing to try these techniques to get an edge on their competition. That was until search engines got wise to what was happening, and started to penalise sites that employed these techniques.
Examples
Let's look over a few outdated examples and how they worked:
- Keyword stuffing: This was essentially cramming as many keywords as possible into a page in an attempt to trick the search engine into ranking it highly for more searches. Even if the keywords weren't at all related to the actual content, they were included anyway. Thankfully, search engines got wise to this tactic and cracked down on sites that used it.
- Cloaking: This is where you show different content to search engines than you do to users. Search engines would be shown pages with detailed, keyword-rich content about a specific product in order to trick a search engine into ranking the page highly. But the page shown to users was minimal and mainly promotional with very little or no helpful information on the page related to what it was being ranked on.
- Hidden text and links: This is the one I remember the most: using CSS or HTML to hide text on a page that was only intended for search engines. Think white text on a white background; it really was that simple (see the snippet after this list)! It was also straightforward to spot, as you'd get pages where the scrollbar was huge but the visible content was very short. The vertical overflow was the hidden text, which you could easily reveal by highlighting it with your cursor!
- Link farms and paid link schemes: This is where companies would create hundreds or thousands of low-quality pages linking back to a specific page, in the hope that the search engine would rank that page highly because of all the backlinks. There were (and most likely still are) whole businesses set up promising to get you to the top of search results by essentially spamming the web like this. If you ever ran a WordPress blog without the Akismet plugin, you'd see this rapidly! WordPress was highly vulnerable due to its support for Trackbacks and Pingbacks (XML-RPC). Have a look at any old, unmaintained WordPress blog and you are likely to see it. These were so common at one point that a new term, "splogs" (spam blogs), was coined for them. Wired.com wrote a blog post all about them back in September 2006: "Spam + Blogs = Trouble".
- Duplicate content: This is a simple strategy, copy another site's high-quality content and pass it off as your own. Thankfully, this now triggers de-ranking in search engines.
- Automated content: Basically automating the production of low-quality and spammy content. I anticipate that this technique will see a resurgence, given the recent surge in AI tools.
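For the curious, the hidden-text trick mentioned in the list above really was as blunt as this sketch suggests (the class name is hypothetical):

/* white text on a white background: invisible to users, fully "visible" to crawlers */
.hidden-keywords {
color: #ffffff;
background-color: #ffffff;
}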
"This is why we can't have nice things!" The phrase echoes in my ears as I recall all the techniques mentioned above.
Why are they Outdated?
There are a number of reasons why these techniques are no longer used:
- Google and other search engines have significantly enhanced their algorithms to identify and penalise such manipulative tactics.
- Modern SEO is more geared towards user-first metrics, content relevance, quality, user experience, and even web performance are now taken into account when ranking a web page.
- Sites found to be using these blackhat tactics are almost certain to be heavily penalised and may even be de-indexed completely from search engines.
- Companies found to be using these tactics on the modern web are very likely to suffer reputational and credibility damage. Some sectors that are heavily regulated will likely have legal implications too.
Can I still use it?
Fortunately, these blackhat techniques are no longer effective on the web. They are detrimental to both users and the internet, yet some individuals persist in attempting to use them.
For example, Google now factors usability into how it ranks web pages; one such metric is Cumulative Layout Shift (CLS). This metric measures the visual stability of a page while it loads; websites that "shift around" during loading (known as "jank") aren't scored as highly as those that are more stable.
I recently saw a new SEO technique using JS that masked an entire page with a transparent element in order to trick the browser's CLS measurement into thinking the page was completely stable. After page load, this element was deleted and the page could be interacted with as usual. Basically, a modern-day cloaking technique aimed at improving a page's Layout Instability API score.
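For context, CLS is built on top of the Layout Instability API mentioned above, which you can observe directly in supporting browsers; a minimal sketch:

// log each layout-shift entry as the page loads
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('layout shift score:', entry.value);
  }
}).observe({ type: 'layout-shift', buffered: true });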
So yes, it still happens and "this is why we still can't have nice things!".
“Above the Fold” obsession
The concept of "above the fold" is an outdated one. The fold's position is not fixed; it varies depending on the device used to view a web page. Taking this to the extreme: viewing a website on a desktop widescreen vs a mobile device, there's never going to be a common "fold". Consider the vast number of device widths, ranging from a large desktop widescreen to a mobile device: literally thousands along the x-axis. Factor in viewport height (the y-axis) and you're looking at millions of possible viewport permutations. Past assumptions are no longer true:
- User behaviour: Users scroll instinctively now. The old belief that users don’t scroll is no longer valid.
- Web performance evolution: Modern performance metrics (like Largest Contentful Paint (LCP) and Interaction to Next Paint (INP)) reward real user-perceived speed, not just fast above-the-fold content.
- Lazy loading and streaming: The web has moved towards prioritising meaningful content dynamically, rather than front-loading everything visible “above the fold”.
Can I still use it?
It depends. “Above the fold optimisation” is an older performance technique that focuses on rendering the visible portion of the page as quickly as possible. When used thoughtfully, it can still improve perceived load speed, especially in critical user flows. However, relying on it too heavily can narrow the focus to just a fragment of the overall experience.
Today, the more effective and sustainable approach is to optimise for end-to-end, user-centric performance. This includes not only what appears first on-screen, but also how quickly the page becomes usable and interactive. A strategy focused on delivering a consistently fast page experience will naturally improve the content visible without scrolling, regardless of the device.
Superseded compatibility approaches
Graceful Degradation
The technique of graceful degradation involves building a website to take advantage of all the modern features of a browser, and once completed add "fall backs" for browsers that don't support modern features.
Examples
An example of graceful degradation: a developer builds a website where the main layout uses CSS Grid, but if a browser doesn't support Grid, it "falls back" to a simpler layout system like Flexbox or even a float-based layout (depending on the site's browser support requirements), as sketched below.
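Here's a minimal sketch of that grid-to-flexbox fallback, relying on the CSS cascade (a browser that doesn't understand a declaration simply ignores it and keeps the earlier one):

.layout {
  display: flex; /* older browsers stop here */
  display: grid; /* grid-aware browsers override the line above */
  grid-template-columns: 1fr 2fr; /* ignored by browsers without grid support */
}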
Another example: a feature-rich, JS-enhanced form may fall back to a basic HTML form if JS is disabled or fails to load (for example, due to a poor or unstable network connection). In this case, core functionality (such as form submission) remains available, even though advanced features (like real-time validation or dynamic UI elements) are unavailable.
Why is it Legacy?
Graceful degradation is increasingly being considered a legacy approach in modern web development as it has largely been superseded by progressive enhancement which takes the opposite approach.
Why is it outdated?
First, graceful degradation assumes a modern baseline, which fails to acknowledge the true diversity of browsers and devices in use. Second, comprehensive testing is challenging due to the difficulty of covering every scenario involving older or limited browsers. Third, adding "fallbacks" further increases the complexity of an already intricate, full-featured initial build. Most importantly, graceful degradation negatively impacts accessibility and resilience: pages built this way frequently fail in low-capability environments, such as older browsers, devices, or poor connections.
Can I still use it?
There are a few scenarios where it may still be useful, these include:
- Legacy enterprise environments: for example, a company that mandates the use of older browsers like Internet Explorer. A notable example of this is banks and other financial institutions in South Korea. For a country that ranks 12th in internet adoption (97.4% in 2025), it's a pretty surprising legacy issue to still be tackling!
- Modernisation: If a website is in a transition phase of being modernised, and it still needs to support older browsers for a limited period.
- Non-critical enhancements: If a site has non-critical enhancements like animations or media features that are optional and don't impact access to the site's core content.
What should I use instead?
Progressive Enhancement is now the preferred approach for modern web development, offering a more robust, inclusive, and future-proof way to build websites and web applications. While Graceful Degradation was a useful technique for older browsers, it has now been superseded.
Browser Sniffing
This is the practice of detecting information about a user's browser, like its specific version number or the operating system it is running on. Once detected, a developer can use this information to "fork" their code, e.g. decide which bug workarounds should or shouldn't be applied, or even tailor the user experience to a specific version of a browser. Two very common uses of this technique in the past were redirecting users to the mobile version of a site (when mobile and desktop sites were built separately), or blocking the use of a site on "unsupported" browsers. An example of how you'd do this in JS is below:
if (navigator.userAgent.includes('Chrome')) {
// Apply Chrome-specific behaviour or simply block other browsers if you are feeling malicious
}

This code highlights a significant problem with browser sniffing, and demonstrates why it's considered an outdated technique. The whole functionality hinges on the browser's User-Agent string happening to include the string "Chrome". But what happens if Google one day decides to change this to lowercase "chrome", or change it completely? The code depending on this detection will break!
Now, you could modify the above code to tackle the case issue like so:
if (navigator.userAgent.toLowerCase().includes('chrome')) {
// Apply Chrome-specific behaviour
}

But as you can see, this has only made the code more complex and fragile.
It's also worth mentioning that this code won't do what you expect anyway, as all Chromium-based browsers will return true. For example:
- Google Chrome
- Microsoft Edge (Chromium-based)
- Opera (also Chromium-based)
- Brave
- Vivaldi
At the time of writing, the User-Agent strings for each of the above browsers contain Chrome/115.0.0.0 (as well as other information that I have removed for this example).
All contain "Chrome" in their User-Agent, so all of them will run the code.
What's worse is that Chrome on iOS will return false and not run the code. On iOS, all browsers, including what appears to be Chrome on the home screen, are forced to use WebKit (Safari's engine). Consequently, "Chrome" in this instance isn't truly Chrome, and this isn't reflected in its User-Agent string (which contains "CriOS" rather than "Chrome").
Other issues
Fragility isn't the only issue seen when using this technique. It can also:
- add a maintenance burden for developers, as this logic will need to be updated as browsers evolve.
- create browser feature mismatches, as two versions of the same browser don't always support the same features.
- cause accessibility risks leading to user exclusion. A user on a less common browser or assistive technology could inadvertently receive a degraded experience, or even be blocked completely.
Can I still use it?
Realistically, no. You should aim to avoid browser sniffing entirely: instead of asking which browser a user is using, ask what their browser can do. Essentially, you want to detect the features that the user's browser supports. For example, to detect whether a browser supports the Service Worker API, you can do this:
if ('serviceWorker' in navigator) {
  // The user's browser supports the Service Worker API, so do Service Worker stuff!
}

Browser Sniffing Summary
In summary, browser sniffing is a legacy technique that should be avoided on the modern web. In order to create a more resilient and inclusive web, you should use Feature Detection, Graceful Degradation, and Progressive Enhancement instead.
Modernizr
I was a big fan of Modernizr (with its very Web 2.0 name!). For readers who've not used or heard of it, Modernizr is an HTML5 and CSS3 feature detection library. Rather than relying on browser user-agent strings, which can be unreliable and misleading, Modernizr actually tests whether the browser in use supports a whole host of features.
It was released in 2009 at version 1.0; since then, it has had 27 releases and 300 contributors. So how does it work, and how exactly do you use it? Here's an example of how it detects flexbox support in a user's browser:
// Adds a new test to the Modernizr object under the key 'flexbox'
Modernizr.addTest('flexbox', function () {
  // Create a new HTML div element to test CSS properties on
  var testElement = document.createElement('div');
  // Attempt to set the display property to 'flex'
  testElement.style.display = 'flex';
  // Check if the browser retains the value 'flex' for the display property
  // If supported, the style will remain 'flex'; otherwise, it may remain empty or be changed
  return testElement.style.display === 'flex';
});

And here's how you would use that in your website. There are two methods:
CSS
Modernizr adds a class to the <html> element, in our case above it would be:
<html class="flexbox"></html> <!-- If the browser supports it. -->
<html class="no-flexbox"></html> <!-- If the browser doesn't supports it.Thus, you could hang any styles you needed off these classes and tweak the CSS layout for both scenarios, flex support and no flex support.
JavaScript
The other method is to use it in JS by examining the Modernizr object:
if (Modernizr.flexbox) {
  // Flexbox is supported, do Flexbox JS stuff here
} else {
  // Flexbox is not supported, do non-Flexbox fallback stuff here
}

The great thing about Modernizr is that even though it supports over 250 feature tests for HTML5, CSS3, and JS APIs, it allows developers to create custom builds that only detect the features they intend to use. This improves performance in two ways:
- Less JS to download, parse, then execute.
- Fewer bytes sent over the network.
Use cases:
- Applying fallback CSS or JS for unsupported features.
- Progressive enhancement strategies.
- Conditional polyfill loading (see the sketch after this list).
- Responsive design tweaks based on feature support (via JS)
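As a rough sketch of that conditional polyfill loading use case (the polyfill path here is hypothetical), only browsers lacking the native feature pay the cost of the extra download:

// Load a polyfill only when the native feature is missing
if (!('IntersectionObserver' in window)) {
  const script = document.createElement('script');
  script.src = '/js/intersection-observer-polyfill.js'; // Hypothetical path
  document.head.appendChild(script);
}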
Can I still use it?
It's still useful, but it isn't as essential as it once was. This is because:
- Browser standardisation has vastly improved since 2009.
- Native CSS now supports feature queries (@supports); see the sketch after this list.
- Feature detection is often built into some JS frameworks.
- Support for ancient browsers has declined.
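For instance, here's a minimal sketch of the native CSS.supports() API (the JS counterpart to the @supports at-rule) replicating the class-toggling behaviour that Modernizr popularised:

// Feature-detect flexbox natively and mirror Modernizr's class convention
if (CSS.supports('display', 'flex')) {
  document.documentElement.classList.add('flexbox');
} else {
  document.documentElement.classList.add('no-flexbox');
}

In a stylesheet, the equivalent check is simply @supports (display: flex) { ... }, with no JS involved at all.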
Ironically, Modernizr, despite detecting over 250 features, does not detect JS support itself. This isn't a problem, though, because if you require this functionality it can be added via a single line of JS in the <head> of your page:
<!-- Default setup -->
<html class="no-js">
<head>
<script>
document.documentElement.className = document.documentElement.className.replace('no-js', 'js');
</script>
</head>

By default, the markup assumes that the browser doesn't support JS (class="no-js"). When the browser reaches the inline script tag, the script executes, and as this proves that JS is supported, it swaps the no-js class for a js class that can then be used in CSS styling, just as you would any other Modernizr CSS class, as demonstrated above.
10. Tests and Standards of Yesteryear
Acid2 and Acid3 Tests
The Acid2 and Acid3 tests were really clever ways to test a browser's compliance with the ever-evolving rendering standards of the time. They were both created by the Web Standards Project (WaSP), which was founded in 1998, when the web was a battleground between two main browsers:
- Microsoft (Internet Explorer 4 at the time)
- Netscape (Netscape Navigator 4.05 at the time)
The Web Standards Project aimed to promote web standards that made development simpler, more accessible, and future-proof, working closely with browser vendors and development toolmakers to achieve this. When the team posted their final blog post in March 2013, their mission was largely complete: they had successfully persuaded browser vendors to support the standards set by the World Wide Web Consortium (W3C). As of 2025, the W3C remains active in setting new standards to ensure the web continues to support communication, commerce, and knowledge-sharing for all, with a strong focus on accessibility, diversity, and inclusion.
Acid2 (2005)
The Acid2 test was created to test compliance with HTML 4.01, CSS 1 & 2, and PNG rendering standards (e.g. alpha transparency). It did this by focussing on the following key areas of browser rendering:
- Box model
- Absolute and relative positioning
- Float behaviour
- Table layout
- PNG alpha transparency
- Data URLs
This was achieved through a highly innovative browser test: rendering a simple cartoon face, similar to a smiling emoji. How compliant the browser was determined how well the face was rendered. It's much simpler if I just show you!
Rendering reference
This is what the output of the test is supposed to look like across all browsers.

Internet Explorer 6
This "face," rendered with IE6, appears as if the individual suffered a severe accident!

Netscape 4.8
Netscape performed similarly to IE6 at the time.

Mozilla Deer Park Alpha 2 (later Firefox 3)
Finally a browser that actually rendered a face!

There were far too many variations and versions of browsers around at the time to list here, but if you are interested in how other browsers rendered the Acid2 test, check out this ancient blog post by author Mark "Tarquin" Wilton-Jones. Mark, thank you for safeguarding this significant and captivating piece of web standards history!
Acid3 (2008)
After the success of the Acid2 browser test in putting pressure on browser vendors to improve standards support, the WaSP team decided to create another browser test, this time focussing on a different set of browser technologies. These included:
- DOM Level 2 and 3
- ECMAScript (JS) behaviour
- CSS 3 selectors
- SVG rendering
- Data URIs
- Animation timing and rendering
- Web fonts via @font-face
Rendering reference
The Acid3 test took a more traditional testing route and simply scored the browser taking the test from 0 (no support) to 100 (perfect support).

In my opinion, the Acid3 test, while practical and easy to interpret, lacked the entertainment value of the Acid2 facial disfigurement test!
The release of the more challenging Acid3 test coincided with a surge in browser competition, particularly among Firefox, Safari, Opera, and Google Chrome (which was released later in 2008).
Scores
At the time of release (March 2008) the Acid3 scores for each major browser were as follows:
- IE7: 12 / 100.
- IE8 Beta 1: 18 / 100.
- Firefox 2: 50 / 100.
- Firefox 3 Beta 4: 71 / 100.
- Opera 9.5 Beta: between 60–70 / 100.
- Safari 3.1: between 75–90 / 100.
- Google Chrome: Not yet released in March 2008.
- Google Chrome 0.2 Beta (first release): 77 / 100.
- Google Chrome 1.0: 100 / 100.
Google Chrome quickly improved its Acid3 score shortly after its initial release. This rapid improvement was mainly due to its use of Safari's WebKit engine, which already scored 75–90 out of 100 at the time.
Legacy
Neither test is maintained or relevant to the modern web, but they played a key role in pushing browser vendors toward better standards support. Today, browser compliance is measured using the Web Platform Tests (WPT), a much broader and actively maintained suite developed by the vendors themselves with input from WHATWG and W3C.
11. What Still Matters – Progressive Enhancement
Not legacy but often forgotten
Congratulations! You've made it! After discussing countless legacy approaches and techniques best left in the past, you've finally arrived at a truly timeless and incredibly important methodology. More than two decades after Steve Champeon and Nick Finck introduced it in their talk "Inclusive Web Design For the Future" at SXSW in 2003, the Progressive Enhancement (PE) methodology remains one of the most robust and future-ready approaches to modern web development.
What is Progressive Enhancement?
There's a ubiquitous diagram that is shown whenever PE is mentioned in a blog post, and this post will be no different in using it, as it actually explains the concept incredibly well.

Here we have the well-known Progressive Enhancement pyramid.
HTML
The HTML sits at the bottom of the pyramid as it gives the website a solid foundation on which to build. The HTML layer is the most resilient layer in the web development stack: without HTML there is no content, no links, no images, no website! This layer is by far the most important in the pyramid. Just to give you an idea of how resilient HTML is in web browsers, let's take a look at the very first website. Back on Tuesday, August 6, 1991, Sir Tim Berners-Lee, the inventor of the Web, published the very first website! Now, this statistic makes me feel ancient: the first website was published almost 34 years ago, at the time of writing! And what do you notice about the page? Well, most importantly, it still renders correctly, and the content can still be read perfectly well after all this time. If you take a peek at the page's source code, you will notice a few oddities, like:
- The complete lack of a DOCTYPE, and no <head> tag.
- No links to external stylesheets or JavaScript (they weren't invented yet!)
- Anchors existed to link to other pages, but they had a strange NAME=[integer] attribute.
- All elements were written in uppercase, e.g. <HEADER>, <BODY>, <TITLE>, <H1>.
- A lack of semantic markup. This was to come later once the WWW had matured.
To put it into perspective, this website has been around longer than nearly half of the entire global population: that's over 4 billion people younger than this single page on the internet! What other digital format on the planet can boast that kind of robustness and ease of accessibility? Just think about all the storage media formats that have come and gone in that time:
- 5.25-inch floppy disk (the disk that was actually floppy)
- 3.5-inch floppy disk (the 3D "save icon")
- CD-ROM / CD-R / CD-RW
- MiniDisc (data version)
- CompactFlash (CF)
- Zip disk
- Jaz drive
- DVD-ROM / DVD±R / DVD±RW
- Blu-ray
- HD DVD
The list goes on… The point is clear: if you need a long-term, reliable storage solution that just works, plain HTML on the web is hard to beat! FYI: Of course, these web assets ultimately reside on physical hardware in data centres, but that’s not the point. What matters is the resilience and accessibility the web platform offers, regardless of the underlying infrastructure.
CSS
The second layer in the pyramid is CSS, or Cascading Style Sheets. When the World Wide Web (W3) was first invented back in the early 90s, CSS simply didn't exist. It wasn't until 1996 / mid-1997 that browsers started to support the CSS Level 1 specification. The browsers at the time were Internet Explorer 3 and Netscape Navigator 4, both of which had partial (and mostly buggy) implementations. Up until this point, the web had been completely "naked" in terms of design: just pages full of text, images, and the odd animated GIF. Nothing at all like the modern web we see today.
CSS constitutes the second layer of the pyramid because, frankly, it is a "nice to have." Browsers are equipped with default stylesheets (as previously discussed in the CSS Resets section), which enable HTML content to display correctly and remain readable even in the absence of a website's custom styles. A company or brand must ensure their CSS is available for browsers to download so that their website renders correctly. Without it, many users would assume the site is broken, especially given how modern websites are expected to appear. But in the unlikely event the CSS fails to load, users will still receive the HTML content in a perfectly readable format. While it may appear unappealing, it remains fully functional across all current (and all future) web browsers and assistive technologies.
The beauty of Progressive Enhancement lies in establishing a foundational layer (HTML) and then progressively adding desired features. This method ensures that if any subsequent layer fails, the underlying content and functionality remain accessible to users.
A prime example of this in action is CSS Naked Day, which has been observed since April 9, 2006: for a 50-hour period, website owners disable their site's CSS, allowing users to experience the semantic HTML without styling. It started as a push for web standards and semantic markup, and gave site owners an excuse to flaunt their sexy <body>. Gotta love a good HTML pun!
JavaScript
The final layer of the pyramid, and the final piece of the Web stack puzzle, is JS: the interaction layer that is added to a site last, after the foundation (HTML) and the design (CSS). It's difficult to believe that just three technologies, all of which have been discussed in this section, form the entirety of the web. There is truly nothing more to it than these three foundational components. Ultimately, the output of both frontend and backend development invariably consists of standard HTML, CSS, and JS. Although a multitude of tools and languages are available for web developers to use, with endless paths to choose from, they all eventually lead to identical HTML, CSS, and JS as their final output. It all comes down to using the right tool and technology for the job!
JS is deliberately placed as the final enhancement layer in the pyramid. This is not incidental. JS, while powerful, is the least resilient layer in the web stack. Its execution depends on multiple fragile components:
- the network
- the parser
- the runtime environment
- the integrity of the code itself.
A single misplaced character, for example, an errant semicolon or an undefined variable, can render entire swathes of interaction inoperable. This fragility is not a hypothetical risk. It manifests regularly across production environments all over the web, particularly where sites are heavily reliant on client-side code for core user journeys.
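To illustrate how noisy that failure can be, consider this contrived sketch: the typo on the first line throws at runtime, so the perfectly valid handler below it is never attached:

// 'toUppercase' doesn't exist (it's 'toUpperCase'), so this line throws a
// TypeError and halts the rest of the script...
const greeting = 'Hello'.toUppercase();

// ...which means this handler is never registered, and the (hypothetical)
// menu button silently stops working for every user.
document.querySelector('#menu-button').addEventListener('click', function () {
  console.log('This never runs.');
});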
Progressive Enhancement Summary
The modern web has increasingly drifted away from the principles of Progressive Enhancement, often placing JS as the foundation rather than the finishing touch. Single Page Applications are a prime example, where even basic navigation and content rendering require full JS execution. This inversion of the pyramid not only risks total inoperability in degraded environments but also introduces avoidable accessibility and performance issues.
From a resilience and user experience standpoint, over-reliance on JS creates brittleness. Unlike HTML and CSS, which both degrade gracefully, JS fails noisily and catastrophically. If a CSS file fails to load, a page might look plain but will still remain usable. If a JS bundle fails, the entirety of the website's features may be lost, with little to no fallback available.
The web’s reach includes users with:
- unreliable networks
- older devices
- constrained data plans
- assistive technologies
A heavy dependence on JS frequently excludes these users or significantly worsens their experience. Progressive Enhancement is not about supporting “no JavaScript” users as a niche edge case. It’s about ensuring a robust baseline that works for everyone, every time, demonstrating empathy for all users regardless of how they access the internet.
While JS is a vital tool in a web developer’s toolkit, it must be handled with care. Its position at the top of the Progressive Enhancement pyramid reflects its power, but also its fragility. It should be used responsibly, with the awareness that its failure often leads to a broken experience. True resilience comes from building upwards from stable foundations, not downwards from brittle interactions.
Importance in government services
Having worked at GDS for 6 years, I can't tell you how many times I had to defend the frontend community's stance on Progressive Enhancement! Thankfully, it's all written in black and white in the Service Manual for all to read. However, some departments and developers found ways to work around the methodology or opted for alternative approaches. This was most likely driven by two things:
- A team had made significant progress with their JS-dependent service and were expressing concerns about meeting the requirements for their future service assessment(s).
- Some Frontend Developers in the department were enthusiastic about adopting the latest client-side frameworks, with less emphasis on assessing their maturity or suitability for the service, and its users.
For point 1, it always amazed me that teams were able to get so far into prototyping before it became an issue. As depressing as it may be to me, maybe the Service Manual and its guidance isn't as well known across government as I'd like to believe?
For point 2, I 100% get it: new technology on the web is fun to play with and also great to have on your CV / Resume! The real question is whether this new technology is truly the right choice for a critical public service that every UK taxpayer depends on and has a fundamental right to access.
Technology Suitability Check (Progressive Enhancement Focus):
- Does the technology's core functionality allow the service to work without JS enabled?
- Can the service still function reliably on low-powered or older devices when using this technology?
- Is the final output from the technology easily accessible and usable with assistive technologies, regardless of the device used?
- Does the technology degrade gracefully in poor network conditions, such as on a 3G connection or in rural areas?
- Are all critical user journeys still functional when JS fails to load, or is blocked?
If the answer to any of the questions above is "no", then the technology probably isn't a great fit for a public service that needs to be maintained for years (or even decades!).
The last point in the list above is incredibly important:
Are all critical user journeys still functional when JS fails to load, or is blocked?
I think this is where there was a lot of misunderstanding around Progressive Enhancement in government. I continually strove to highlight that a JavaScript-only journey doesn't require a direct 1-to-1 correlation with its progressively enhanced foundation. As long as, for each user journey, a user can complete their task quickly and easily, then the use of JavaScript is fine.
For example, consider a feature-rich JavaScript dashboard built to enter user data into a backend database. If a simple HTML form with a submit button can achieve the same outcome (which it often can), then the dashboard is acceptable only if the HTML form provides a reliable fallback for situations where JavaScript is unavailable, such as when it fails to load due to a limited data plan, a poor connection, or a low specification device.
During a service assessment, the key question is whether the dashboard meaningfully improves data entry and user interaction, or whether it exists purely for the sake of using new technology. Adopting new tools without clear justification is not acceptable for a government service. I consider such an approach to be driven by a desire to boost a developer's CV or LinkedIn profile, e.g. CDD (CV-Driven Development) or LDD (LinkedIn-Driven Development).
12. Lessons for the Future
What these legacy practices teach us today
If there is one takeaway from this post, it is that Frontend Development has never stood still. What counts as best practice today can feel outdated or even “legacy” tomorrow. That constant state of reinvention is what first drew me in back in the late 90s, and it is what continues to excite me now. Backend development always seemed a little too steady, too predictable. Frontend, on the other hand, has always lived on the edge of change.
But what is different today is the scale of change ahead of us. With the rise of Artificial Intelligence (AI), we may be standing on the edge of a shift as significant as the birth of the modern Web itself. Just as the early internet reshaped how we live and work, the combination of AI and the Web could redefine what it even means to build, design, and interact online. The coming years are not just going to be interesting, they could mark a turning point in the history of our craft.
Applying lessons to modern frontend work
Core principle
Choose optimal solutions for the enduring parts of the stack: HTML, CSS, and JavaScript are the stable contract. Prioritise the most straightforward and maintainable approach to delivering clean, accessible HTML, efficient CSS, and lightweight, well scoped JavaScript. The further you move away from those three, the harder accessibility, maintenance, and performance become. Let the browser and the web platform do the heavy lifting.
Practical rules to work by
1. Start from the Web Platform
Prefer native elements and platform features before adding libraries. Use form controls, semantic HTML, CSS layout, media queries, inert, details and summary, dialog, fetch, URLPattern, IntersectionObserver, and Web Components where they fit.
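As a small sketch of what "letting the platform do the work" looks like (the element IDs here are hypothetical), the native dialog element provides a modal, with the rest of the page made inert and Escape-to-close handled for free, without a single library:

// Markup assumed in the page:
// <button id="open-dialog">Delete account</button>
// <dialog id="confirm-dialog">Are you sure? <button id="close-dialog">Cancel</button></dialog>
const dialog = document.querySelector('#confirm-dialog');
document.querySelector('#open-dialog').addEventListener('click', function () {
  dialog.showModal(); // Native modal behaviour: backdrop, focus, Esc to close
});
document.querySelector('#close-dialog').addEventListener('click', function () {
  dialog.close();
});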
2. Progressive enhancement as a default
Deliver meaningful HTML first, enhance with CSS, then layer JavaScript for interactivity. Critical journeys should still work when scripts fail or load slowly.
3. Ship less code
Adopt a dependency diet. Each abstraction must earn its keep through measurable value. Small utilities over frameworks by default. If a framework is chosen, configure it to output lean HTML, CSS, and JS.
4. Accessibility first, not last
Use semantic structure, proper labels, roles only when needed, visible focus, real buttons and links, and respect reduced motion preferences; test with keyboard and screen readers. Performance is 100% an accessibility feature.
5. Performance budgets and baselines
Set budgets for bundle size, interaction latency, and memory. Track Core Web Vitals from real users. Fail builds that exceed budgets. Optimise for first input delay, input responsiveness, and low CPU use on mid-range devices.
6. Keep the build simple
Prefer standard tooling that converges to web standards. Use the minimum build steps required. Long pipelines increase failure modes and slow iteration.
7. Design for resilience
Favour server rendering for first paint, hydrate only what is interactive, cache well, and handle partial failure gracefully. Make error states explicit.
8. Document the escape hatches
Where you choose abstractions, document how to reach the underlying HTML, CSS, and JS. Future teams should be able to debug without learning a bespoke stack.
9. Measure before you change
Add observability. Use Real User Monitoring (RUM) to guide work. Optimise the slowest real user paths, not synthetic microbenchmarks.
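A minimal RUM sketch using the native PerformanceObserver API (the /analytics endpoint is hypothetical); in practice you might reach for a small library such as web-vitals, but the principle is the same:

// Observe Largest Contentful Paint for this real user's visit and beacon
// the latest candidate back to a (hypothetical) analytics endpoint.
new PerformanceObserver(function (list) {
  const entries = list.getEntries();
  const lastEntry = entries[entries.length - 1];
  navigator.sendBeacon('/analytics', JSON.stringify({
    metric: 'LCP',
    value: lastEntry.startTime,
  }));
}).observe({ type: 'largest-contentful-paint', buffered: true });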
10. Plan for upgrades
Last, but not least, prefer tools with clear deprecation policies and migration paths. Avoid lock-in. Isolate framework code behind simple boundaries so you can replace parts without rewriting the product.
A quick decision test
- Can this be done with native HTML or CSS alone?
- If not, can a few lines of vanilla JS do it without a dependency?
- If not, does a library reduce long term cost and keep output close to the platform?
- If a framework is still justified, can it produce accessible HTML by default and degrade gracefully?
Closing thought
Technologies come and go, but the contract with the browser remains. Choose the simplest path that produces high quality HTML, CSS, and JavaScript. The closer you stay to the platform, the easier your product will be to maintain, to make accessible, and to run fast at scale.
Post Summary
I have to admit, my posts always seem to take on a life of their own and end up being longer than I plan. Concise writing might be a goal for another day. If you made it all the way here, congratulations, you’ve officially joined the “end of post club”! I really hope you found this journey as enjoyable to read as it was for me to write. Revisiting these ideas was a real trip down memory lane, and it reminded me of things I hadn’t thought about in years.
Your thoughts and feedback are always welcome. If I’ve overlooked a method or technique you think deserves a mention, let me know and I’ll happily credit you in the changelog. Thanks again for sticking with me to the very end, and if you’d like to share your thoughts, you can do so here.
Post changelog:
- 26/08/25: Initial post published.
- 27/08/25: Fixed number ordering of headers.
- 27/08/25: Added Table of Contents for easier navigation!
- 28/08/25: Added Silverlight and Java Applets to the post (Thanks to an AI hallucination, regarding browser plugins from the past)
- 17/09/25: Thanks to Sven Kannengiesser for drawing my attention to the use of ‘here’ anchors. I have now revised them to be contextual and accessible.