# Nooshu - Matt Hobbs - Frontend web developer, turned engineering manager. ## This is the website of Matt Hobbs, who is a Frontend Engineering Manager from Oxfordshire, UK. URL: https://nooshu.com --- Start: Asset fingerprinting and the preload response header in 11ty Published on: 02 September 2025 https://nooshu.com/blog/2025/09/02/asset-fingerprinting-and-the-preload-response-header-in-11ty/ Main Content:

This blog post builds on a number of posts that I wrote earlier in the year: Using an 11ty Shortcode to craft a custom CSS pipeline; Cranking Brotli up to 11 with Cloudflare Pro and 11ty; The Speed Trifecta: 11ty, Brotli 11, and CSS Fingerprinting. Some insights from those earlier posts carry over here, so check them out for a fuller view of my custom CSS pipeline for 11ty. In this post, I'll improve performance by adding the preload technique to my blog. First, let's look at what it is.

Preload basics

In a standard web page load, once requested, the server sends over the HTML document along with numerous response headers. The HTML is progressively served to the browser, and it is only when the parser encounters the standard `<link rel="stylesheet">` element that the browser requests the CSS file from the server. Therefore, best practice is to place this CSS link as close to the top of the `<head>` as you can. This ensures that the browser sees it quickly and thus starts downloading it as soon as it can. But what if you could give the browser a "hint" as to what is coming up in the document? This is where the preload hint functionality comes in. The preload hint is essentially saying to the browser: I know you're busy doing other things at the moment, but you should also know that you absolutely will be requiring this file soon in the page load. So stick it at the top of your list to download as soon as you can. It's important to realise that this is only a "hint", not a mandatory instruction. The browser may choose to entirely ignore it if, for example, it has already parsed and discovered the file you wish it to preload. There are two ways in which you can implement a preload.

1. Link in the head

This is probably the easiest way to add a preload to a website. Stick a preload `<link>` element in the `<head>`, for example: `<link rel="preload" href="/css/index.css" as="style">`.

2. Preload link header

This method ensures that the browser is told what other resources to load along with the HTML document, in the form of a response header from the server. The "link in the head" functionality above looks like this as a response header: `Link: </css/index.css>; rel=preload; as=style` and `Link: </js/main.js>; rel=preload; as=script`, OR combined into a single header: `Link: </css/index.css>; rel=preload; as=style, </js/main.js>; rel=preload; as=script`. Both forms give the exact same functionality; it just comes down to readability. I don't believe the single-line version gives any performance advantage, especially once any form of header compression is applied, e.g. HPACK for HTTP/2 or QPACK for HTTP/3. But please do let me know if this assumption I'm making isn't true!

In both instances above, we are telling the browser to preload the page's CSS and JavaScript because they will be required to render the page. You may notice that I phrased it as "will be required". This is deliberate because it is far too easy to abuse the preload functionality. If you instruct the browser to preload everything, you will likely harm web performance instead of improving it. So only preload assets that are genuinely needed for the page to render. Otherwise, you risk wasting bandwidth on unnecessary resources and slowing down the rendering process.
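Used sparingly, then, a preload for a page's critical CSS might look something like this in the `<head>` (the path here is illustrative, not my actual fingerprinted filename):

```html
<head>
  <!-- Hint: fetch the stylesheet at high priority, before the parser would normally discover it -->
  <link rel="preload" href="/css/index.css" as="style">

  <!-- The regular stylesheet link, parsed and applied as normal -->
  <link rel="stylesheet" href="/css/index.css">
</head>
```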
I know Firefox warns you in the DevTools console if an asset has been preloaded but not used within a certain time period, and other browsers may do this as well. So always check your browser console for similar messages.

Preload and fingerprinting

There is a small added challenge when using asset fingerprinting with the preload functionality. Since the filename of the CSS or JavaScript changes completely whenever the file contents change, you cannot simply preload index.css or main.js. They will instead be renamed to something like index-362ccd3816.css or main-2fc0e9cad0.js. These are just example names, but the important point is that the file names are unpredictable and will change with each build, assuming the content of the files changes. Since nobody wants to update a preload reference by hand every time a file changes, this is where a bit of 11ty scripting magic steps in to save the day.

The Code

In order to roll this functionality into my 11ty build, I created a helper file in my _helpers directory in the root of my blog. This is called header-generator.js. Imaginative name, huh! All functionality related to the header generation is contained within this file. It is then imported into my eleventy.config.js like so:

```js
import { generatePreloadHeaders } from './_helpers/header-generator.js';
```

Now, I only want this code to run in production, and after the 11ty build completes, so I added the following later in the config:

```js
if (IsProduction) {
  eleventyConfig.on('eleventy.after', generatePreloadHeaders);
}
```

Hopefully, this code is fairly self-explanatory. I'm hooking into the eleventy.after event, which is the point at which my CSS has been Brotli compressed and fingerprinted, and the Link header is ready to be generated and added to my Cloudflare Pages _headers file (documentation here) before the _site directory is deployed. Below is the complete header-generator.js file I am using, with detailed comments to make it easier to follow:

```js
// standard node library imports
import fs from 'fs';
import path from 'path';

// This script generates preload headers for fingerprinted CSS files in the _site/css directory
// and adds them to the global section of the _headers file (/*) in the _site directory.
// It prefers Brotli-compressed files (e.g. *.css.br rather than *.css) if available.
export function generatePreloadHeaders() {
  // Log the start of the process
  console.log('Generating preload headers for CSS files...');

  // This is my CSS directory for my blog
  const cssDir = path.join('./_site', 'css');

  // Check if CSS directory exists
  if (!fs.existsSync(cssDir)) {
    console.log('CSS directory not found, skipping header generation');
    return;
  }

  // Find fingerprinted CSS files (both .css and .css.br). We prefer .css.br if available.
  // Fingerprinted files match the pattern index-[hash].css or index-[hash].css.br
  const cssFiles = fs.readdirSync(cssDir)
    .filter(file => {
      // Match files like index-b9fcfe85ef.css.br or index-b9fcfe85ef.css
      return file.match(/^index-[a-f0-9]{10}\.css(\.br)?$/);
    });

  // Nothing found so exit
  if (cssFiles.length === 0) {
    console.log('No fingerprinted CSS files found, skipping header generation');
    return;
  }

  // Sort to prefer *.br files over *.css files
  // (compression is done via the zlib library in another helper file)
  // Both .css and .css.br files exist in the same folder with the same file hash
  // (the hash is generated from the unminified and uncompressed CSS file)
  cssFiles.sort((a, b) => {
    // If a is .br and b is not, a comes first
    if (a.endsWith('.br') && !b.endsWith('.br')) return -1;
    // If b is .br and a is not, b comes first
    if (!a.endsWith('.br') && b.endsWith('.br')) return 1;
    return 0;
  });

  // Take the first (preferably .br) file
  const cssFile = cssFiles[0];
  // Construct the path for the Link header
  const cssPath = `/css/${cssFile}`;

  console.log(`Found CSS file: ${cssFile}`);

  // Now we need to read the existing _headers file, add the preload header to the
  // global section (/*), and write it back
  try {
    // Read the source headers file
    const sourceHeadersPath = path.join('./public', '_headers');
    // Set our target headers file
    const targetHeadersPath = path.join('./_site', '_headers');

    // Check if source _headers file exists
    if (!fs.existsSync(sourceHeadersPath)) {
      console.log('Source _headers file not found');
      return;
    }

    // Read the existing headers content
    let headersContent = fs.readFileSync(sourceHeadersPath, 'utf8');

    // Create the preload header
    // Note: 'nopush' prevents Cloudflare from doing HTTP/2 server push
    const preloadHeader = ` Link: <${cssPath}>; rel=preload; as=style; nopush`;

    // Find the global /* rule and add the preload header to it
    // Look for the line that just contains "/*" which is the global section
    const lines = headersContent.split('\n');
    let globalSectionIndex = -1;
    let nextSectionIndex = -1;

    // Find the global section (line that starts with just "/*")
    for (let i = 0; i < lines.length; i++) {
      if (lines[i].trim() === '/*') {
        globalSectionIndex = i;
        break;
      }
    }

    // Find the next section (line that starts with a path)
    if (globalSectionIndex !== -1) {
      for (let i = globalSectionIndex + 1; i < lines.length; i++) {
        if (lines[i].trim() !== '' && !lines[i].startsWith(' ')) {
          nextSectionIndex = i;
          break;
        }
      }
    }

    // If we found the global section, proceed to add or update the Link header
    if (globalSectionIndex !== -1) {
      // Check if a Link header already exists in the global section
      let linkHeaderExists = false;

      // Iterate through the lines in the global section
      for (let i = globalSectionIndex + 1; i < (nextSectionIndex === -1 ? lines.length : nextSectionIndex); i++) {
        // Check for the existence of the Link header
        if (lines[i].includes('Link:')) {
          // Replace existing Link header
          lines[i] = preloadHeader;
          // The Link header exists and has been updated
          linkHeaderExists = true;
          break;
        }
      }

      // If no Link header exists, add one
      if (!linkHeaderExists) {
        // Find the last header line in the global section
        let lastHeaderIndex = globalSectionIndex;
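        // For context: a Cloudflare Pages _headers file groups indented header lines
        // under URL patterns, and the global section this script targets looks roughly
        // like this (illustrative headers, not my exact file):
        //
        //   /*
        //     X-Content-Type-Options: nosniff
        //
        // The loop below walks that section to find its last indented header line, and
        // the splice that follows inserts the preload Link header straight after it.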
        // Iterate until the next section or end of file
        for (let i = globalSectionIndex + 1; i < (nextSectionIndex === -1 ? lines.length : nextSectionIndex); i++) {
          if (lines[i].trim() !== '' && lines[i].startsWith(' ')) {
            lastHeaderIndex = i;
          }
        }

        // Insert the Link header after the last header
        lines.splice(lastHeaderIndex + 1, 0, preloadHeader);
      }

      // Rejoin the modified _headers file
      headersContent = lines.join('\n');
    } else {
      console.log('Could not find global section in _headers file');
      return;
    }

    // Write the updated headers to the _site directory before deployment to Cloudflare Pages
    fs.writeFileSync(targetHeadersPath, headersContent);
    console.log(`Generated preload header: Link: <${cssPath}>; rel=preload; as=style; nopush`);
  } catch (error) {
    console.error('Error generating preload headers:', error);
  }
}
```

For a cleaner version without comments, I've uploaded the code to a Gist. Find the code Gist here.

The Cloudflare _headers file

This setup is currently working really well; the only minor "issue" that doesn't sit right with me presently is the fact that the Link header sits on the global header path (/*) in the _headers file. This means the Link header is added to all assets served from my blog. As far as I know, this shouldn't cause any issues, as browsers will just ignore it on a file type that doesn't support it. But I would like to rectify this in the future. In my testing with the Cloudflare _headers file, once a header is set in Cloudflare Pages it cannot be removed or overwritten. The only "fixes" I've found for this are: Use a response header transform rule in the Cloudflare dashboard to remove the Link header from all file types served except CSS. Look into a Cloudflare Workers solution to examine the server responses at "the edge" and remove the header that way. I will eventually move forward with option 1 as it looks to be the most straightforward way to do it. I've mentioned this "minor issue" simply to highlight that headers cannot be removed using the _headers file once they have been set. If anyone knows how to do this using only the _headers file, please let me know. I'd love to learn how.

Summary

All this is now live on this very blog, so using my custom CSS pipeline with 11ty, I now have the following happening before deployment to live: CSS minified and Brotli compressed to level 11 (the highest). CSS asset fingerprinting to allow for long-lived Cache-Control headers, including immutable. Preloading of the CSS file to reduce its discovery time and improve page performance. On a side note: this is probably one of the fastest (and shortest) blog posts I've written in a while! I knew I could do it! 🤣 I hope you found it as enjoyable to read as I did to write. As always, feedback and post corrections are welcome. Spot anything wrong? Please do let me know. Post changelog: 02/09/25: Initial post published.

--- End: Asset fingerprinting and the preload response header in 11ty --- Start: Hack to the Future - Frontend Published on: 26 August 2025 https://nooshu.com/blog/2025/08/26/hack-to-the-future-frontend/ Main Content:

Table of Contents Hack to the Future - Frontend 1. Introduction Context Looking back at "legacy" practices Lessons we can apply today 2. Setting the Time Circuits to the late 90s My first website build The late 90s web landscape 3. The Early Web - Layout and Design Practices Photoshop PSDs as the single source of truth Frame-Based Layouts Table-Based Layouts Quirks Mode Layouts Fixed Width Fonts for Responsive Text 4.
The Plugin Era – Flash and Friends Flash-based content Scalable Inman Flash Replacement (sIFR) Cufón GIF Text Replacements Adobe AIR Yahoo Pipes PhoneGap / Apache Cordova Microsoft Silverlight Java Applets 5. The JavaScript Library Explosion DHTML Beginnings (1997) Prototype.js (2005) Script.aculo.us (2005) Dojo Toolkit (2005) Yahoo! User Interface (YUI) (2006) moo.fx (2005) MooTools (2006) jQuery (core) (2006) Ext.js (2007) jQuery UI (2007) AngularJS (2010) Backbone (2010) Knockout (2010) 6. CSS Workarounds and Browser Quirks Old CSS practices Sliding Doors Technique Image Sprites for Icons Vendor Prefixes for CSS Heavy Use of !important in CSS OldIE hacks DOCTYPE fragility zoom: 1 hack Underscore Hack Asterisk Hack Star HTML Hack Child Selector hack Double Margin Float Bug Peekaboo bug fix Transparent PNG fix Lack of IE Developer Tools IE Conditional Comments IE CSS Selector Limit 7. Markup of the Past XHTML 1.1 and 2.0 Inline JavaScript Document.write() Fixed Viewport Meta Tags Web Safe Fonts Only (before @font-face) 8. Tools and Workflow Relics SVN (Subversion, largely replaced by Git) Chrome Frame CSS Resets Hover-Only Interactions 9. Legacy Web Strategies Blackhat SEO "Above the Fold" obsession Superseded compatibility approaches Modernizr 10. Tests and Standards of Yesteryear Acid2 and Acid3 Tests 11. What Still Matters - Progressive Enhancement Not legacy but often forgotten What is Progressive Enhancement HTML CSS JavaScript Progressive Enhancement Summary Importance in government services 12. Lessons for the Future What these legacy practices teach us today Applying lessons to modern frontend work 13. Post Summary

1. Introduction

Context

So over the last few months at work, I've been conducting interviews to hire Frontend Developers for a number of new projects we have in the pipeline. It was only when looking at CVs that it struck me: a lot of these candidates weren't even born when I first started in my Web Development career! So I thought developers getting into a Frontend Developer career today may want to learn a bit about what it was like when I first started (that sentence just makes me feel old! 👴).

Looking back at "legacy" practices

Why would we want to look back on legacy best practices on the web, other than for the obvious academic and general-interest reasons? Studying past best practices and legacy systems is crucial for understanding the evolution of technology and making informed decisions today. By examining the problems old practices were designed to solve, we gain a deeper appreciation for current best practices and avoid repeating past mistakes. As the philosopher George Santayana once said: Those who cannot remember the past are condemned to repeat it. This historical perspective also reveals enduring principles like progressive enhancement, which remains vital for creating accessible and resilient systems on the web.

Lessons we can apply today

For developers, understanding past methodologies is essential for properly maintaining and modernising existing systems without causing critical failures. This historical knowledge ultimately helps them navigate the complexities of older codebases and make informed decisions about how to update or replace components. Above all, reflecting on the past can help us come up with creative new ideas and prevent us from blindly following new trends.
This perspective also provides a comprehensive view of how the web has evolved, grounding our current practices in a deeper understanding of the technology's history. This process of building on past knowledge is a fundamental aspect of human progress. Just as civilisations learn from historical events to avoid repeating mistakes, developers can learn from the successes and failures of past technological eras. It's how humanity has always evolved: by building upon the accumulated wisdom and experience of those who came before us. By studying the mistakes and triumphs of the past, we improve our own work and contribute to the continuous cycle of innovation and learning that drives our entire industry forward.

2. Setting the Time Circuits to the late 90s

My first website build

In 1998, while working toward my GCSEs, I became interested in art and design, partly thanks to having an art teacher as my form tutor throughout secondary school. That influence, combined with the opportunity to take a double art GCSE for the same effort as a single GCSE, made the choice a pretty easy one! GCSE Art, here I come! At the same time, I was already immersed in the emerging world of the internet, spending many hours online and discovering a passion for many areas of computing and online gaming thanks to QuakeWorld Team Fortress, despite the frustration it caused at home by tying up the phone line at all hours of the day. Oh how I loved my US Robotics 56K modem, with its 120-150 ping! Integrated Services Digital Network (ISDN) or any form of broadband was still many years away for most people!

I was never exactly blessed with traditional artistic talent; painting, drawing, all of those art forms just weren't my thing. But I spotted an opportunity to combine my love of technology with the art curriculum. Back then, there were only about 2.4 million websites in existence worldwide. Most businesses and schools (including mine) were firmly offline. So, I proposed building a website for my final art project. To my surprise, my art teacher was absolutely thrilled with the idea. It turned out to be a first for the school and, as I later discovered, a first for the entire exam board too. Shock horror: I was ahead of the curve once. The curve has been safely ahead of me ever since.

I ended up creating a website for a fake record label, complete with a dreadful album cover, fictional artist, and made-up discography. Honestly, I wish I still had it! It was gloriously awful! I don't recall much, but I remember the site used a `<frameset>` with three `<frame>` elements. The top frame displayed the logo, the left frame held the navigation menu, and the main frame was used for the page content. The logo, by the way, was crafted in a program called 3D Text Studio (or something similar to that) that churned out spectacularly cheesy animated GIF text. From a web performance perspective, that single GIF exceeded 2 MB. On a 56K modem, which was the standard connection for most users of the web at the time, that translates to a 6-minute loading time for just that GIF! Fortunately, it was never hosted online and was presented to the examiners directly from my local machine. Long story short… the examiners loved my little website and I got a double A* Art GCSE for my effort! So what's all this preamble leading to?
Well, this is just a long-winded way to tell you (again) that I'm old… 😭

The late 90s web landscape

There have been some things I've noticed while questioning candidates in interviews recently: many candidates don't have the faintest idea of some of the old methodologies used in the world of Frontend, especially during the "unstable" periods of the web like the late 90s and early 00s: The first browser war (1995–2001): Internet Explorer vs Netscape Navigator. The second browser war (2004–2017): Internet Explorer vs Firefox vs Google Chrome. Being a Frontend Developer in the late 90s was fun in terms of innovation, but also exceedingly stressful due to the instability of the web platform! A prime example being cross-browser development. What worked in Netscape often looked very broken in Internet Explorer (and vice versa)! And if you had clients who were looking for "pixel perfect" designs across all browsers, you were in for a bad time! Throughout this period, a plethora of methodologies, tools, and workarounds were developed to address deficiencies in the web platform. And that's what the rest of this post will delve into. Buckle up folks, we are about to time travel to an era when the internet started with the screeching of dial-up noises and I still had brown hair!

3. The Early Web – Layout and Design Practices

Photoshop PSDs as the "single source of truth"

Using Adobe Photoshop Documents (PSDs) as a single source of design truth was a very common practice in the early days of web design. This was particularly common when design and development teams were siloed. A designer would create a PSD file that was intended to be precisely what the website would look like in the browser.

Issues

There was no consideration given to page structure, behaviour, or interactions. These fixed-layout PSDs encouraged bad practices like: Fixed page dimensions, e.g. 1024px x 768px as a static canvas. 1:1 mapping of the Photoshop file to the web page, which was rarely achievable, especially given cross-browser inconsistencies with page rendering. Lack of fluid or responsive design. I realise responsive design wasn't "a thing" at this time, but could it have been adopted sooner if fixed-width PSD workflows hadn't ever taken hold? The technique was more suited to static layouts, like print design, rather than web design. There were issues tracking interaction states like anchors with hover, active, disabled, and focus. Dynamic content was difficult to visualise (e.g. the rendering of different lengths of text in the browser). Poor accessibility adaptations (e.g. increased font sizes and high-contrast modes weren't considered in the design files). The only way to solve many of these issues would be to create multiple PSDs to hold all these different design assumptions. And in doing so, file management and design revisions would quickly become impractical and prone to being incomplete or inconsistent.

Broken team collaboration

The use of PSDs as the single source of truth broke how teams could collaborate and innovate. This was because: Developers would often have to interpret or translate the PSD design manually without the help of designers (e.g. due to siloed teams and strict job roles). Changes in the design required round-trips to designers, rather than being evolved collaboratively in code. Small team bottlenecks were common, e.g. all design or development decisions needed to go through individuals rather than a whole team. Files became outdated rapidly, leading to teams working on outdated designs without realising it.
Designers often came up with designs that simply couldn't be built with the web technologies that existed at the time, especially when their designs were expected to work across different browsers.

Modern Alternatives

I'd like to think that designers using Photoshop for modern web design is a thing of the past, given the vast number of tools and techniques that are way more suited to the job than Photoshop ever was. Modern teams typically use: Design tokens and internal component libraries as the "single source of truth". Figma or similar tools with structured, token-aware components. Living style guides and code-driven prototypes (e.g. Storybook). Clear handoffs between teams using tools like zeroheight, or integrated design-to-dev platforms. These modern collaboration tools enable design and development teams to share the same language and source of truth, rooted in reusable, well-tested, and accessible components.

Photoshop PSDs Summary

In the early days of my frontend career, slicing PSDs was second nature, but that workflow is now obsolete. Using Photoshop as a "single source of truth" leads to siloed teams, rigid layouts, and poor collaboration. It ignores responsiveness, accessibility, and the realities of modern web development. Today, tools like Figma, design systems, and component libraries enable faster, more inclusive, and collaborative workflows. If you're still building from PSDs, it's time to move on! As the web has evolved, it is imperative that we all do the same.

Frame-Based Layouts

Frame-based layouts were introduced into browsers to solve a specific set of problems. These were: To allow static content like navigation menus to remain in place while only the main content of the page gets updated on navigation. To reduce the amount of data transferred over the network, since only one part of the page would need to be loaded. This was important at the time: remember, in the late 1990s and early 2000s broadband simply wasn't available for most people. If you were very lucky (and had the money), you'd be able to get an Integrated Services Digital Network (ISDN) line installed in your home, but it was mostly only businesses that had the money (and justification) for this type of connection, and even ISDN wasn't particularly quick. Adjusted for inflation you'd be looking at £60 to £80 per month for a 0.128 Mbps connection! To simulate a more app-like experience before JavaScript (JS) and CSS became more standardised and mature.

Example

For those curious, a simple frames page consisted of an index.html whose `<frameset>` pulled in the other documents via `<frame>` elements. Notes: To use `<frameset>` and `<frame>` you needed a specific HTML 4.01 Frameset DOCTYPE in the index.html file. In my example, for a single HTML page you'd have to maintain 3 HTML files (index.html, menu.html, and content.html). Each frame was like a mini browser window that loaded its own HTML document.

Problems

Unfortunately, there were a number of major issues with Frame-Based Layouts: Terrible user experience: the use and navigation of frames was confusing for users, since you effectively had multiple browser panes in a single page. The URL bar would often remain static even as the content of the page changed. Poor Accessibility: Screen readers and other assistive technology struggled to navigate frames, making it incredibly difficult for users with disabilities to understand the page content and overall page structure.
Limited Search Engine Optimisation (SEO) compatibility: Even search engines of the day struggled to understand the index pages built within frames. This led to poor visibility in search results, as crawlers frequently failed to understand the relationship between the different frames. Navigation and Browser Compatibility: Because the back and forward buttons did not consistently produce the desired results, frames disrupted the navigation history, making it difficult for users to find their way around. The fact that different browser vendors weren't aligned on how frames should work led to cross-browser issues too. Bad for security: Frames allowed for security risks like clickjacking. This is where an attacker gets a user to interact with a page that contains malicious content without the user even realising. Modern browsers now include protections to stop these types of security issues.

Modern Alternatives

Modern CSS Layouts: Flexbox and Grid allow for responsive layouts without compromising navigation, accessibility, and SEO. Single Page Applications (SPAs): Frameworks like React, Angular, and Vue allow developers to load page content dynamically without the need for full-page reloads. Be careful though, these libraries come with their own inherent issues if not used correctly! Server-Side Rendering and Partial Updates: Techniques like server-side includes, AJAX, or component-based rendering to update portions of a page efficiently.

Frame-Based Summary

As mentioned in the introduction at the start of this post, my first website was built using frames! I sincerely hope you never have to maintain a frame-based website! But given the enormity of the internet, it is almost certain such websites exist somewhere out there, having been untouched for decades! If you do come across one, remember to take a quick peek at the source code; it's like looking back in time! Frames once served a purpose in the early days of the web but are now considered obsolete. Their usage introduced more problems than they solved, and they have been replaced with techniques that are more performant, accessible, and maintainable. Any modern website should be using semantic HTML, CSS-based layouts, and progressive enhancement.

Table-Based Layouts

In the late 1990s and early 2000s, table-based layouts were a common technique for building a web page's structure. A simple example of what this would look like is below:
```html
<html>
<head>
  <title>Table Layout Example</title>
</head>
<body>
  <table width="100%" border="0" cellpadding="10">
    <tr>
      <td><h1>My Table-Based Web Page</h1></td>
    </tr>
    <tr>
      <td>
        <h2>Welcome</h2>
        <p>This layout uses an HTML table for structure, which was common
        before CSS-based layouts became standard.</p>
      </td>
    </tr>
  </table>
</body>
</html>
```
Why was it used?

At the time, CSS and layout techniques were inconsistent and unstable across browsers. Developers looking for stability in cross-browser rendering turned to tables to achieve it. Back then, tables offered: Predictable cross-browser rendering. Control over alignment, spacing, and sizing. The ability to nest elements in a grid-like structure. It was very common to see nested tables and transparent "spacer GIFs" in invisible table cells to control these layouts more precisely. You'd often find logos, sidebars, navigation, footers, and content areas all laid out within a deeply nested HTML table in order to achieve the layout and design that was required.

Why was it so bad?

The first and hopefully most obvious point is that the
`<table>` element was intended for the display of tabular data. The fact that it was used as a workaround for the lack of standardised layout techniques shows the ingenuity of developers at the time. Unfortunately, the use of tables for layout came with many considerable downsides, including: Semantics: As mentioned, tables should represent structured data, not layout. Misusing them confuses assistive technologies and harms accessibility. Maintainability: Table-based layouts are challenging to read, modify, or scale. Small changes often require restructuring entire layouts. Responsiveness: They are rigid and not suited to fluid or responsive design, which was to come a number of years later. Performance: They delay rendering because browsers need to calculate the entire table layout before painting it to the page.

Is the technique still used?

There are some areas where table-based layouts may still be seen: Legacy code bases that desperately need to be refactored. I can imagine there are many internal systems across the world where table-based layouts are still used. I'd imagine the conversation about modernising goes something like this… "If it still works, why change it?". Very short-sighted, I know! Table-based layouts are still widely used in emails due to the very limited support for CSS in email clients. It's not always the lack of support; it's the fact that many clients simply strip out any CSS in the process of rendering the email HTML. To give you an example of how bad it still is, from Outlook 2007 onwards Microsoft switched to Microsoft Word as the HTML rendering engine! And it's still in use today with Outlook 365! I did my fair share of HTML emails as a Junior Frontend Developer, and the internationalised versions were the worst! Using the same table-based layouts for 19+ languages is never going to work well, especially with languages like German and their famously long words! Sorry… rant over! They are often still used in PDF generation tools, e.g. data-driven print views: invoices etc.

Modern alternatives

Modern CSS offers clean, semantic, and powerful layout tools, including: Flexbox: One-dimensional layouts (ideal for nav bars, toolbars, etc.). CSS Grid: Two-dimensional layouts (ideal for full-page layout and complex structures). Media Queries: Enable responsiveness across devices. Container Queries (still an emerging technology): Context-aware layout changes.

Table-Based Summary

Table-based layouts are a throwback to a bygone era, thankfully! The years of building HTML emails have scarred me for life! They were developed during a period in which CSS was inadequate for the task, so developers had to get creative to wrestle with browser quirks, and tables were the go-to workaround. Thankfully, these days we've moved on to semantic HTML and proper CSS that actually does what we need (for webpages anyway). It's cleaner, more flexible and maintainable, and way better for accessibility.

Quirks Mode Layouts

This topic is covered in more detail later in the blog post, but I'll briefly mention it here for completeness. It's important to realise that Quirks Mode layouts weren't limited to Internet Explorer (IE). Quirks Mode originated with IE, but it was not exclusive to it. It later became a cross-browser convention in order to preserve compatibility with the many existing web pages on the internet, because that's the primary rule to consider when rolling out any new technology change on the web. Whatever you do, "don't break the web!".
For example, if a vendor released a new browser feature that wasn't backwards compatible with existing web pages, then you have a major issue, as you've just broken the web! I talk about XHTML 2.0 later in the post, as it is a prime example of a proposed technology that would have broken the web. This backwards compatibility was the sole purpose of Quirks Mode. It gave modern browsers the ability to switch between: Quirks Mode: Mimics pre-standards behaviour. Used for old, non-compliant pages. Standards Mode: Adheres to modern web specifications (W3C and WHATWG standards). Almost Standards Mode: The same as Standards Mode with one exception, table cell line-height rendering. This was to preserve layouts that used inline images inside HTML tables.

How were layouts triggered?

The browser decided which layout mode to use from the list above purely from the DOCTYPE used on the page. For example:

Trigger Quirks Mode

This DOCTYPE will trigger Quirks Mode layout: `<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">` It looks valid, but it is missing the system identifier (URL), therefore it is a malformed DOCTYPE, so Quirks Mode is triggered. A valid DOCTYPE is given below for comparison: `<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">` That missing URL in the DOCTYPE is vital. Quirks Mode would also be activated if a page did not have a DOCTYPE at all, or if the DOCTYPE differed from a valid one in any way. IE even had a really nasty habit of triggering Quirks Mode if any character was output in the page source before the DOCTYPE. This included invisible characters, new lines, and line returns too! As you can imagine, it made debugging issues an absolute nightmare!

Almost Standards Mode

The following DOCTYPEs will trigger Almost Standards Mode: HTML 4.01 Transitional (with full system identifier): `<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">` HTML 4.01 Frameset (with full system identifier): `<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">` XHTML 1.0 Transitional: `<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">` XHTML 1.0 Frameset: `<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Frameset//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-frameset.dtd">`

Standards Mode

And lastly, and most importantly for modern web development, this is the DOCTYPE you should be using to trigger Standards Mode in all modern browsers: `<!DOCTYPE html>` This simplified DOCTYPE was brought in as part of the HTML5 specification after 6 years of standardisation (2008–2014). Why was this version created? As outlined in all the examples above, previous DOCTYPE versions were: Long. Error-prone. Required both a public and a system identifier. Affected rendering modes (Quirks, Almost Standards, Standards). In order to solve these issues, the new DOCTYPE: does not reference a Document Type Definition (DTD), as HTML5 no longer relies on SGML-based validation. has only a single purpose: to trigger Standards Mode in all modern browsers.

Quirks Mode Summary

As we have discussed above, Quirks Mode wasn't an IE-exclusive layout mode. It was introduced into all browsers in order to "not break the web". To ensure your website uses Standards Mode, use `<!DOCTYPE html>`. And remember, it must be the first thing in the source code of the page!

Iframe Embeds for Layouts or Content

If you've already read the Frame-Based Layouts section above, then this section will be very similar. Although both are now considered legacy techniques, they come with distinct differences.

Frameset

As I discussed earlier, the `<frameset>` tag completely replaced the `<body>` tag and allowed developers to split the browser window into multiple, scrollable, resizable sections. Each section (a `<frame>`) loaded a separate HTML document. This technique was intended to be used as a layout structure, e.g. different parts of the user interface (UI) came from different HTML documents.
Navigation in one frame would control the content in another frame. Inline frames (Iframes) These were introduced later in the HTML 4.01 Transitional specification. Example You will immediately notice the difference, using an