CSS munging – a FAILed experiment

October 3rd, 2009. Tagged: CSS, performance

Not sure if I've ever put this in writing, but CSS irks me with its verbosity. I mean things like: background-position, padding-bottom, text-decoration... those are long property names, used repeatedly in stylesheets. And there's pretty much nothing you can do about them in terms of minification; it's not like they are variables in JavaScript, which your YUI Compressor can rename to, like, a, b and c, respectively.

Or can you do something about it? Here's the big idea:

The big idea

I thought of using JavaScript arrays to store each CSS property only once, and also each value only once. Then have another array with selectors and a way to match each selector to indices in the property/value arrays. A simple loop in JavaScript can reconstruct the stylesheet. Simple, right? And it avoids all those verbose repetitions.

Drawback - breaking the separation of concerns, relying on behavior (JS) in order to get presentation (CSS). If someone has JS disabled, they get no styles. That's a big no-no, but there's one case where you can safely break the separation - in lazy-loaded functionality. Here's what I mean...

Combining lazy-loaded assets

Page loads ok in its basic form - styles and all. Then it gets progressively enhanced with JavaScript. JS adds new features. If you have JS OFF, you don't get them. But if you do, it makes sense to have the feature atomic - one file containing both JS and CSS - saving an HTTP request (rule #1 for faster pages). That's a pretty cool idea in itself and you can find implementations in the wild, including high-traffic sites such as Google search and the Yahoo! homepage.
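Here's a minimal sketch of what such a combined asset could look like (the selector and the CSS string are made up for illustration, not from any real site):

// feature.js - one lazy-loaded file that ships both behavior and presentation
(function () {
  // the feature's CSS, inlined as a string at build time
  var css = '#feature{border:1px solid #ccc;padding:10px;}';

  // inject the styles via a style element
  var style = document.createElement('style');
  style.type = 'text/css';
  if (style.styleSheet) { // IE
    style.styleSheet.cssText = css;
  } else {
    style.appendChild(document.createTextNode(css));
  }
  document.getElementsByTagName('head')[0].appendChild(style);

  // ...the feature's JavaScript goes here...
}());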

Ok, with the SoC out of the way, what about that failed experiment?

A failed experiment

(psst, demo here)

You start with something like:

#test {
    border: 1px solid blue;
    font-family: Verdana;
} 
a {
    padding: 0;
    font-family: Arial;
}

Then (all that during build time, not run time!) you have a parser that will walk over the CSS and "understand" it into an array of objects, like:

[
    {
        selector: '#test',
        rules: [
            {
                property: 'border',
                value   : '1px solid blue'
            }
        ]
    },
    
]
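Such a parser doesn't need to be anything fancy. Here's a minimal sketch (my own illustration, not the demo's exact code) that assumes flat rules only - no comments, no @media blocks, no nested braces:

function parseCSS(css) {
  function trim(s) {
    return s.replace(/^\s+|\s+$/g, '');
  }
  var result = [],
      blocks = css.split('}'),
      i, j, parts, decls, pair, rules;
  for (i = 0; i < blocks.length; i++) {
    parts = blocks[i].split('{');
    if (parts.length !== 2) {
      continue; // empty or malformed chunk
    }
    rules = [];
    decls = parts[1].split(';');
    for (j = 0; j < decls.length; j++) {
      pair = decls[j].split(':');
      if (pair.length < 2) {
        continue;
      }
      rules.push({
        property: trim(pair[0]),
        value: trim(pair.slice(1).join(':')) // keep colons inside values, e.g. url()s
      });
    }
    result.push({selector: trim(parts[0]), rules: rules});
  }
  return result;
}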

Then from that structure you produce four arrays that contain:

  1. all selectors,
  2. unique properties (sorted),
  3. unique values and
  4. a map that connects selectors to their properties and values.

For the example above, these are:

s:['#test','a'], // s-selectors
p:['border','font-family','padding'], // p-properties
v:['1px solid blue','Verdana',0,'Arial'], // v-values
r:[[0,0,1,1],[2,2,1,3]] // r-map, or rules
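The build step that turns the parsed structure into those four arrays could be sketched like this (again my illustration, not the demo's exact code; it keeps first-occurrence order rather than sorting, which happens to produce the exact arrays above):

function munge(parsed) {
  var o = {s: [], p: [], v: [], r: []},
      i, j, map, rule;
  // return the index of item in arr, appending it first if it's new;
  // this lookup is what guarantees each property and value is stored only once
  function index(arr, item) {
    for (var k = 0; k < arr.length; k++) {
      if (arr[k] === item) {
        return k;
      }
    }
    return arr.push(item) - 1;
  }
  for (i = 0; i < parsed.length; i++) {
    o.s.push(parsed[i].selector);
    map = [];
    for (j = 0; j < parsed[i].rules.length; j++) {
      rule = parsed[i].rules[j];
      map.push(index(o.p, rule.property)); // property index
      map.push(index(o.v, rule.value));    // value index
    }
    o.r.push(map);
  }
  return o;
}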

Finally, your build process produces a JavaScript immediate function that reconstructs the CSS string, which you can then shove into a style tag and be done with it.

(function(o,s,i,j){ // o: the data object; s,i,j: locals, declared as params to save bytes
s='';
for(i=0;i<o.s.length;i++){ // loop over the selectors
  s+=o.s[i]+'{';
  for(j=0;j<o.r[i].length;j=j+2){ // walk the property/value index pairs
    s+=o.p[o.r[i][j]]+':';
    s+=o.v[o.r[i][j+1]]+';';
  }
  s+='}';
}
return s;
})({
s:['#test','a'],
p:['border','font-family','padding'],
v:['1px solid blue','Verdana',0,'Arial'],
r:[[0,0,1,1],[2,2,1,3]]
})
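For the example data, that immediate function evaluates to the reconstructed stylesheet as one string:

'#test{border:1px solid blue;font-family:Verdana;}a{padding:0;font-family:Arial;}'

From there it's the same style-element injection shown in the lazy-loading example above.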

Results

The demo is here; paste your CSS and see what the results are for your own styles. Here's what I found:

  • Really tiny stylesheets don't make sense, because the overhead of the reconstruction function exceeds any savings
  • Otherwise it helps! You get smaller CSS, yay! Champagne! Caviar! Everybody dance!
  • But... after gzipping, the AFTER is actually bigger than the BEFORE. Cold shower. The whole thing goes down the drain. C'est la vie; gzip does a better job than I can do with JS arrays.

Here are the results of munging one 16K (gzipped) CSS file found on the Yahoo! homepage:

Raw:
  source: 95735
  result: 87300
    percent: 8.81077975662%
Gzipped:
  source: 16211
  result: 16730
    percent: -3.20152982543%
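The demo does the byte counting for you, but if you want to double-check the gzipped sizes of your own before/after files, something like this in Node will do (my assumption for illustration; the file names are made up):

var fs = require('fs'),
    zlib = require('zlib');

var source = fs.readFileSync('source.css'), // the original stylesheet
    result = fs.readFileSync('munged.js');  // the munged JS version

console.log('raw:', source.length, '->', result.length);
console.log('gzipped:',
    zlib.gzipSync(source).length, '->',
    zlib.gzipSync(result).length);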
