In the last LinuxWorld article I wrote for the magazine I talked about FOSS anniversaries, mostly because a number of important projects turned into double figures, and yet most people let it pass them by.
Talk to young programmers and developers today and you'd be fooled into thinking that free/open source software (FOSS) was a relatively new invention. Those crusty old folk among us (myself included, born in that prehistoric era of the early '70s) know that it goes back a little further than that.

Many of us become dewy-eyed about our memories of Linux when it first came out – or the first Red Hat release. In fact, many of the FOSS projects that we take for granted today are a heck of a lot older than people realize.
And my final request:
To try and redress the balance I’m starting a FOSS anniversaries project. Initially it’s going to be held on my personal blog at http://mcslp.com – click on the FOSS Anniversaries link to go to the page. If I get enough interest, I’ll consider improving on it and moving it elsewhere. Until then, if you’ve got some additions or corrections, use the contact form to let me know.
Here is the FOSS Anniversaries page, which is on this site. If you want me to update anything, use the Contact page.
After some heavy negotiations, discussions and a lot of thinking, the majority of the old LinuxWorld team will be doing some work with a new organization, joining Brian Profitt at LinuxToday/LinuxPlanet. These sites are run by the same people (Jupiter Media) for whom I do ApacheToday/ServerWatch material, so it's a relatively familiar home, although under slightly different circumstances ;)

- Kevin Bedell, Contributing Editor – Open Source Software and Licensing
- Dee-Ann LeBlanc, Contributing Editor – Games and Multimedia
- James Turner, Senior Contributing Editor
- Ibrahim Haddad, Contributing Editor – Telecom
- Maria Winslow, Contributing Editor – Open Source Applications
- Martin C. Brown, Contributing Editor – LAMP Technologies
- Steve Suehring, Contributing Editor – Security & Internet
- Steven Berkowitz, Contributing Editor – Sciences
- Rob Reilly, Contributing Editor – TBD

Each of us will be on the hook for one article a month, plus blog feeds through the LinuxToday/LinuxPlanet sites. You can expect the first content to start showing up around the beginning of June.

I've also got a couple of other announcements in the pipeline. I'll let everybody know when I have something more definite to tell.
If you have completely and utterly missed all the fuss over the last week, then you won't know about the problems at LinuxWorld, and more specifically Sys-Con, regarding the publishing (and endorsing) of a particular article. I won't link to it, for obvious reasons, nor will I link to all the other articles and information that have been generated in response.

I do this with a significant amount of sadness – I've enjoyed working on the magazine immensely. I was there at the start, when LinuxWorld Magazine was going to be called Linux Business and Technology magazine, and I've had the honour of working with some wonderful people and companies on things like the Sun hardware review, and with the numerous publishers and other organizations. However, I don't feel that I can work within an organization that operates the way Sys-Con does any more. We, as editors, have been unhappy for some time, and some of you will remember the problems over the new year from a familiar source, and my 'letter from the editors' explaining the situation.

We have no control over the website; even the new one, which went live recently, is completely out of our control. Many people don't understand how this can be the case – even with the recent issues, many assume we have full and absolute control over content on the website. This simply wasn't the case. Instead, the LinuxWorld.com website is an automatic amalgam of articles and posts from across Sys-Con that may, or may not, be Linux related. Our only direct way of providing content for our site was through our also recently enabled blogs (http://mc.linuxworld.com). We have no control over the articles automatically added and syndicated on the site. The first time we see them is the first time you see them. Yes, it's odd. No, we didn't like it. Yes, I've said that before.

We have, fortunately, always had control over the print magazine; at least in terms of the articles and editorial content.
But advertising in both mediums has always been a Sys-Con controlled element. I don't have a problem with that, but it did create issues when our site – heavily Linux focused – started to include Microsoft ads which were actually part of the ad revenue for another magazine in the Sys-Con stable.

There were also other, minor, niggles. For example, the internal editors' mailing list, used by us editors to discuss content and distribute press releases and contact information, died in November last year. After two months of inactivity I eventually started hosting the mailing list myself because it became intolerable to work in any other way. Then, late last month, we lost all our LinuxWorld domain email addresses. Even now, they are not working. I have, and always will have, difficulty understanding how IT companies can often have such bad IT departments that it takes weeks and months to resolve trivial items.

Finally – and this was a bigger issue for some editors – nobody at the editorial level, nor anybody who wrote articles for the print magazine, got paid. We all knew that when we started, but when you devote time and energy to a project, even for free, you expect things to run slightly more smoothly. Of course, what particularly sticks in the throat is that some contributors to the Sys-Con content system were getting paid. And that just makes the whole situation even more maddening: the people who write bad articles get paid for it, while the people who slog their guts out following ethics and common sense do it for free. Guess who gets the brown stuff when it all goes wrong.

Anyway, after much deliberation I decided to leave. I have better things to do with my time than devote it to free, unpaid projects run by companies whose only interest is the money they make from ad impressions on inflammatory articles. My main LinuxWorld blog, over at mc.linuxworld.com, will die. The content on there (such as it is) will be removed and migrated here.
The books blog (books.linuxworld.com) will be migrated to a new site. Keep your eyes peeled for an announcement about that shortly.
Note: This post was originally part of my LinuxWorld blog; now migrated here after my resignation.

Fudging tests to make your company or product look better is not new, but certain companies and organizations seem to have more of a reputation for it than others. The latest Get the Facts document is a perfect example of how to bend the rules.
First, some good things about the tests:
- They used the same hardware
- They had a standard set of tests
- They used a standard set of individual clients, rather than some testing mechanism saturating a handful of machines
- They provided full details of how they did everything
Unfortunately, this last item is exactly what highlights the problems with the comparisons. Here are the main points of failure in their testing methodology:
- They used an old version of Red Hat Advanced Server and Red Hat Professional. We're talking old kernel code here, something which (especially the new threading model) is going to make a significant difference. By comparison, they used Windows Server 2003 RC2 – not a final release, to be fair – but with a completely different set of features compared to the Linux OS they used; there's not a huge difference, functionality- and performance-wise, between RC2 and the current release. There's a heck of a difference between RHEL 2.1 and the current release.
- They used different SSL cipher configurations on the two boxes. For Windows they used one of the simplest, least processor-intensive options; on the Linux box, they used a more complex and more CPU-intensive algorithm.
- They compared CGI performance on Apache against ISAPI performance on IIS. They also compared CGI on both, but comparing ISAPI to CGI is unfair; it's not as if Apache lacks the functionality to host an equivalent, such as mod_perl, mod_python or even PHP. A better comparison would have been the PHP ISAPI module in IIS against the PHP module in Apache.
- They made some interesting tweaks. Many of the Windows and Linux configuration tweaks are pretty much identical. Some of them, though, are different, although the casual observer would think they were identical. For example, under Windows they say 'Set HKLM\System\CurrentControlSet\Control\FileSystem\NtfsDisableLastAccess to 1', while for Linux/Apache they say 'Disabled Access Logging'. For those not familiar with Windows, the registry tweak they mention disables last-access-time updates on the filesystem. What they did on Linux was disable web server logging within Apache. What they didn't do was disable filesystem access-time updating (which they could have done with a simple noatime option when mounting the filesystem). Access-time updating is a known performance hog, and one which most Linux performance tuners would disable merely out of habit.
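For anyone wanting to apply that filesystem tweak themselves, disabling access-time updates is a one-line change on Linux. This is a generic sketch – the device and mount point are placeholders, not the benchmark machine's layout:

```
# /etc/fstab – add noatime to the mount options column
/dev/sda2  /var/www  ext3  defaults,noatime  1 2

# or apply it to a live system without a reboot:
mount -o remount,noatime /var/www
```

With noatime set, the kernel stops writing an inode update for every file read, which is exactly the saving the Windows registry tweak provides.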
As always, what is sad is that many people will read this document and choose Windows without a second thought. They won't even check the facts – not even a simple reconciliation of version numbers. Windows Server 2003 – even with SP1 – is still referred to as such, but Linux, even in its commercial versions, is constantly evolving and improving, and there's a big difference between RHEL 2.1 and RHEL 3.0.
Note: This post was originally part of my LinuxWorld blog; now migrated here after my resignation.

When I did the review of the Sun v40z, I said at the time that it was a fast machine, whether it was running Solaris or Linux. Now some more official benchmarks have come out, and it looks like the Sun v40z is an exceedingly fast little box. Some of these performance stats are amazing, most of them world-beating for an x86 machine. I'm just glad I got to try one out. Even better, some of the stats you see use the new dual-core Opterons, giving you an 8-way SMP capable box in the same footprint as the old 4-way box. To quote from the floating-point performance report:
On the compute intensive industry-standard SPEC CPU2000 benchmark, the Sun Fire V40z server has achieved SPECfp_rate2000 result of 138, setting a new world record for all 8-way x86-compatible systems, as of April 21, 2005. This record outshines the previous top score of 41.1, which was set by the Intel Xeon MP-based HP ProLiant DL740 server, by over 3x. The enhanced 4-socket server, equipped with the dual-core Opteron processors, demonstrates more than double the performance of the single-core 4-socket competitive servers outfitted with the newest Intel Xeon MP EM64T-capable family of processors. Specifically, Sun Fire V40z server tops the performance of HP ProLiant ML570 G3 and Dell PowerEdge 6850 servers (52.6 and 52.5 respectively) by over 2.5x on the floating point throughput test.
This fits in line with the press release announcing dual-core Opteron support. OK, the stats are Solaris based, not Linux based, but this box is just as capable of running Linux (I ran Fedora Core 3 x64 without any issues for over a month). I can't see any reason why Linux, Fedora or otherwise, couldn't achieve similar results if given the opportunity. Now all I need to do is find the money to buy one…
Note: This post was originally part of my LinuxWorld blog; now migrated here after my resignation.

OK, so it takes some time – this is old hardware – but I'm happy to say that the new mail server, running Gentoo on SPARC, is running perfectly.

It's been up now for almost 72 hours, and I've been filtering spam for the last 48 hours with only about 8 messages making it through (out of 190), compared to the 20-30 that would make it through on the old system.

The config is simple: Constable (Gentoo on SPARC) accepts all the email through Postfix, then I use amavisd-new in combination with SpamAssassin and ClamAV to capture everything. This re-injects into Postfix, which then forwards it on to the real mail server, Gendarme (Solaris 8 x86). On this box I use Sendmail in combination with Cyrus, and use Sieve to do some additional filtering, which gets rid of about another 20 emails that get through, by ensuring they were actually sent to me, contain the right headers, and pass a few other simple tests.

Setting up a spam filtering machine is nothing new of course, but it's nice to see that with Gentoo everything is very easy. A couple of emerge commands to install the various bits and I'm up and running. Well, on this old machine, after a significant wait of course; it took the best part of two days to compile and install the OS and then the required software. But what is easy is that the dependencies are sorted out for you. For example, installing amavisd-new required quite a few packages, and even installing something seemingly straightforward like NFS required a couple of packages I didn't already have.

In comparison to the pain you can have even installing pre-packaged RPMs, this is a dream. With Gentoo, you also get the ability to be a bit more specific about some of the options.

There are some complaints though; the current MySQL packages (on SPARC) are not new enough to work with some of the latest, standard versions.
I can't connect to my MySQL server, which is running on a Windows Server 2003 box, because the client's authentication support doesn't match the server's; it's a problem I've experienced before, but with Gentoo I really don't want to spoil the ability to update to the latest versions by running 'emerge world'.

For the moment though, I'm happy. I should spend about half an hour less sorting through the spam.
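For anyone wanting to wire up the same Postfix/amavisd-new loop, the relevant configuration looks roughly like this. This is a generic sketch using amavisd-new's conventional ports (10024 in, 10025 back out), not my exact config, so adjust it to match your own amavisd.conf:

```
# /etc/postfix/main.cf – hand all incoming mail to amavisd-new
content_filter = smtp-amavis:[127.0.0.1]:10024

# /etc/postfix/master.cf – the transport out to amavisd-new...
smtp-amavis unix -  -  n  -  2  smtp
    -o smtp_data_done_timeout=1200
    -o disable_dns_lookups=yes

# ...and the listener amavisd-new re-injects clean mail into;
# content_filter is emptied here to avoid an endless filtering loop
127.0.0.1:10025 inet n  -  n  -  -  smtpd
    -o content_filter=
```

From there it's just a matter of pointing the re-injected mail's next hop at the real mail server via your transport map or relay settings.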
Note: This post was originally part of my LinuxWorld blog; now migrated here after my resignation.

In this piece over at New Scientist is the news that some of us have been expecting for a while – phishing has moved on from simply redirecting people through spam emails; now attackers are doing it by polluting the DNS namespace. If you haven't already, make sure you've upgraded to the latest BIND (9.x) and then use features like DNSSEC to help ensure that DNS information is distributed and validated properly.
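Turning the DNSSEC options on in BIND 9 is only a couple of lines in named.conf. A minimal sketch – the exact option names and requirements vary between 9.x point releases, and validation also needs trust anchors configured, so check the Administrator Reference Manual for your installed version:

```
// named.conf fragment – a sketch, not a complete configuration
options {
    // serve signed zone data to resolvers that ask for it
    dnssec-enable yes;
    // validate answers received from other servers
    // (requires trust anchors for the zones you care about)
    dnssec-validation yes;
};
```

DNSSEC won't stop a user clicking a link in a spam email, but it does make it much harder for a poisoned cache to silently redirect them.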