Shock, Horror, RSS Loading Issues

Not to take anything away from the recent fuss over RSS (mostly generated by the release of Tiger), but the issue is nothing new. It’s interesting to see Om Malik and Chris Holland at The Apple Blog (which I also contribute to) mention the problem, but I first recommended that website developers and news-reader developers work together and make use of the technology available to reduce the impact of RSS feeds back in January. I posted the piece on 31st Jan over at my LinuxWorld blog (it’s updated today because of a formatting fix), calling for developers to start making some changes. I actually wrote it over Christmas, but we didn’t get the blogs up and running until late Jan.

Unfortunately, while the current ‘we should use the HTTP headers’ idea is sound, it’s not an efficient solution on its own. There’s more work here than simply checking HTTP header codes; we need a programmatic element that makes the decisions about when to download which components. If all we do is look at the HTTP headers, then we download the RSS file only if the headers say it has changed. But the change could be just one 1K post in a file that is 40K in size, and we still pull down the whole 40K. I know lots of people and systems use static RSS files because they can easily be cached by the HTTP server, but I think in time a more intelligent dynamic element would eliminate many of the problems. We need to change the way the mechanism works so that even when there is a change in the feed, we download only the absolute minimum amount of information required. Hmm, I’m probably duplicating portions of that January posting, but you get the idea.

Another solution I’ve considered proposing is to distribute the load around, using techniques similar to those employed by mirrors for larger downloads. Choose a mirror of Om Malik’s blog, for example, rather than the source.
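To make the limitation concrete, here’s a minimal sketch of what a conditional GET looks like in Python (the feed URL and function names are just illustrations, not anyone’s actual implementation). Even when the server honours If-Modified-Since and ETag properly, any change at all still triggers a full re-download of the whole file:

```python
import urllib.request
import urllib.error

def build_conditional_request(url, last_modified=None, etag=None):
    """Build a request carrying the validators from the previous
    fetch, so the server can answer 304 Not Modified."""
    request = urllib.request.Request(url)
    if last_modified:
        request.add_header("If-Modified-Since", last_modified)
    if etag:
        request.add_header("If-None-Match", etag)
    return request

def fetch_if_changed(url, last_modified=None, etag=None):
    """Conditional GET.  Returns (body, last_modified, etag); body is
    None when the server answers 304 Not Modified.  Note the catch:
    one new 1K post in a 40K file still means re-downloading all 40K,
    because the headers only say *whether* the file changed."""
    request = build_conditional_request(url, last_modified, etag)
    try:
        with urllib.request.urlopen(request) as response:
            return (response.read(),
                    response.headers.get("Last-Modified"),
                    response.headers.get("ETag"))
    except urllib.error.HTTPError as err:
        if err.code == 304:
            return None, last_modified, etag
        raise
```

So conditional GET saves bandwidth only on the polls where nothing changed; a smarter programmatic element would also have to decide *which parts* of a changed feed to fetch.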
There are update issues here, though, and it won’t resolve the ultimate problem of users who absolutely must get the news ASAP in case their life crumbles beneath them, but it would probably eliminate 90% of the single-point load and traffic problem. While we’re at it, why don’t we standardize on a format too, instead of the three different RSS standards? (I’m not suggesting or recommending Atom (another syndication format), but having four different syndication standards seems a little daft.) A combination of the two – improved generation and downloading, and a simpler format – would go a long way towards solving some of the headaches experienced.