Regular readers will know I am a fan of both Linux and Solaris, for different reasons and, often, for different solutions and environments. Back at the beginning of October I wrote this mammoth piece on my Computerworld blog: Distributions and standardization. It looks at the movement of Linux (an open source OS) towards a standardized base just at a time when OpenSolaris has been released – an OS based on standards that is now open source. There’s the potential here for OpenSolaris to have the advantage over Linux in this regard. Computerworld asked me to condense that piece down into an article for the printed magazine, which now appears online as OpenSolaris Has a Leg Up on Linux. The latter has attracted more comments (directly by email) than the blog post, but the common thread is the same – Solaris may have an advantage, but it could be its only one. I’m not here to take sides, merely to point out the situation – I always choose the operating system according to its target use and environment – but the OpenSolaris/Linux debate is going to be an interesting one to watch.
Welcome to Laptop Solaris, a blog looking at the use of Solaris (and OpenSolaris) on a laptop.

I’ve been a long-time user of Solaris, and with the announcement of OpenSolaris my interest is further piqued in the direction of Solaris as a full-time desktop operating system. I currently use Solaris mostly for servers – until very recently a Solaris 8 x86 box handled all of my internal networking needs. Today I use an Ultra 60, running Solaris 10, to handle the majority of my database needs, and as a platform for all of my websites, both the internal ones and those I develop for clients.

Using Solaris on the desktop – and more specifically my laptop – will enable me to cover many aspects of Solaris and to make comparisons with my other main desktop platforms: Mac OS X, Gentoo Linux (running KDE) and Windows XP. I fully believe that with applications like StarOffice, Firefox, Thunderbird and others there should be little reason why I can’t use Solaris as a main desktop, and portable, operating system. Only time – and this blog – will tell how successful this process is!

On here, expect to see posts on some, or all, of the following topics:
- Installation of Solaris
- Daily use and experiences
- Getting software to work under OpenSolaris and laptop-specific tools
- Hardware and my experiences of Solaris/OpenSolaris on different mobile hardware
- How-tos and guides on getting the best out of your Solaris-on-laptop experience
- News and events on using Solaris on a laptop
That list might change in time, but the aim is to be a useful resource as well as a handy guide to what is going on. If you want to keep up to date, consider subscribing to the RSS feed or, better still, to the Planet MCslp RSS feed, which will include everything here and at all of my other blogs. If you have something to say or ask, please Contact Me.
Brian Proffitt has used his weekly editorial to confirm that the majority of LinuxWorld editors have moved over to LinuxToday/LinuxPlanet. You can see his editorial on LinuxToday.
After some heavy negotiations, discussions and a lot of thinking, the majority of the old LinuxWorld team will be doing some work with a new organization, under Brian Proffitt at LinuxToday/LinuxPlanet. These sites are run by the same people (Jupiter Media) for whom I do ApacheToday/ServerWatch material, so it’s a relatively familiar home, although under slightly different circumstances ;)

- Kevin Bedell, Contributing Editor – Open Source Software and Licensing
- Dee-Ann LeBlanc, Contributing Editor – Games and Multimedia
- James Turner, Senior Contributing Editor
- Ibrahim Haddad, Contributing Editor – Telecom
- Maria Winslow, Contributing Editor – Open Source Applications
- Martin C. Brown, Contributing Editor – LAMP Technologies
- Steve Suehring, Contributing Editor – Security & Internet
- Steven Berkowitz, Contributing Editor – Sciences
- Rob Reilly, Contributing Editor – TBD

Each of us will be on the hook for one article a month, plus blog feeds through the LinuxToday/LinuxPlanet sites. You can expect the first content to start showing up around the beginning of June. I’ve also got a couple of other announcements in the pipeline; I’ll let everybody know when I have something more definite to tell.
The one thing I hate about buying computer equipment is the timing: you can guarantee that the moment you buy something, a newer, better, and probably cheaper model will be released the day after. For me, it’s today. According to InfoWorld, Seagate will be releasing 100GB 7200RPM drives and 120GB 5400RPM drives in a few weeks’ time. Today, after much deliberation, I ordered two Toshiba 100GB, 5400RPM drives. I thought about a 7200RPM 60GB unit, but decided that the space would be more useful. Since they will replace 60GB 4200RPM units, I’m still upgrading both speed and capacity. And the new Seagate drives will be expensive; my two came in at just over £200+VAT, while it might have cost upwards of £400+VAT for the new units. That doesn’t make Seagate’s announcement any less annoying, of course…
Not to take anything away from the recent fuss over RSS (mostly generated by the release of Tiger), but the issue is nothing new. It’s interesting to see Om Malik and Chris Holland at The Apple Blog (which I also contribute to) mention the problem, but I recommended some time ago that developers of websites and news readers work together and make use of technology to reduce the impact of RSS feeds. I posted the piece on 31st Jan over at my LinuxWorld blog (it’s updated today because of a formatting fix), calling for developers to start making some changes. I actually wrote it over Christmas, but we didn’t get the blogs up and running until late Jan.

Unfortunately, while the current ‘we should use the HTTP headers’ idea is great, it’s not an efficient solution. There’s more work needed here than simply identifying HTTP header codes; there needs to be a programmatic element that makes the decisions about when to download components. If all we do is look at the HTTP headers, then we download the RSS file only if the HTTP header says it has changed. But that change could be just one 1K post in a file that is 40K in size. I know lots of people and systems use static RSS files because they can easily be cached by the HTTP server, but I think in time a more intelligent, dynamic element would eliminate many of the problems. We need to change the way the mechanism works so that even when there is a change in the feed, we download only the minimum amount of information required. Hmm, I’m probably duplicating portions of that January posting, but you get the idea.

Another solution I’ve considered proposing is to distribute the load around, using techniques similar to those employed by mirrors for larger downloads: choose a mirror of Om Malik’s blog, for example, rather than the source.
There are update issues here though, and it won’t resolve the ultimate problem of users who absolutely must get the news ASAP in case their life crumbles beneath them, but it would probably eliminate 90% of the single-point load and traffic problem. While we’re at it, why don’t we standardize on a format too, instead of the three different RSS standards? (I’m not suggesting or recommending Atom (another syndication format), but to have four different syndication standards seems a little daft.) A combination of the two – improved generation and downloading, and a simpler format – would go a long way towards solving some of the headaches experienced.
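To make the point about the HTTP-header idea concrete, here is a minimal sketch of conditional-GET caching for a feed reader – the approach that helps but, as argued above, still re-downloads the whole file when anything changes. The helper names and cache shape are illustrative, not from any particular feed reader:

```python
# Hypothetical conditional-GET helpers for a feed reader.
# The cache maps feed URL -> {"etag", "last_modified", "body"}.

def request_headers(cache, url):
    """Build validator headers so the server can answer 304 Not Modified."""
    headers = {}
    entry = cache.get(url)
    if entry:
        if entry.get("etag"):
            headers["If-None-Match"] = entry["etag"]
        if entry.get("last_modified"):
            headers["If-Modified-Since"] = entry["last_modified"]
    return headers

def handle_response(cache, url, status, etag, last_modified, body):
    """Store the feed only when it actually changed on the server."""
    if status == 304:  # unchanged: reuse the cached copy, nothing downloaded
        return cache[url]["body"]
    cache[url] = {"etag": etag, "last_modified": last_modified, "body": body}
    return body
```

Note that even in the best case this is all-or-nothing: a 304 costs almost nothing, but a 200 pulls the full 40K file for a 1K change, which is exactly the inefficiency described above.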
I’ve never really agreed with the ‘if it ain’t broke, don’t fix it’ line of thinking. There are a number of reasons for that, but the primary one is that my world and environment are constantly changing, so the chance of a component or situation remaining stable long enough for me to use that statement just never arises. To give an example, just this last week we decided to move a printer downstairs for Sharon so that she doesn’t have to keep running up and down stairs when printing things out. That simple move led to a whole sequence of events that culminated in my reorganizing parts of my desk, which now has less stuff on it, because I’ve moved it over to the tri-level printer table. This in turn led me to reorganize the power strips under my desk (I managed to free no less than four sockets, and reduce the untidiness of the cables). That opened up a whole can of worms over the location of some paperwork and my active work folders, and so it goes on.

That’s just my work environment. This morning I’ve been planning ways of improving the IT here. There are a couple of things which aren’t quite working how I’d like. For example, the main mail server uses post-acceptance spam filtering – i.e., mail gets delivered and is then filtered by a Perl script (which calls SpamAssassin and ClamAV) – but I can’t afford to shut the mail server down while I reconfigure. Meanwhile, I’m running on a reduced firewall until I can sit down and reconfigure ISA 2004. And at the back of my mind I’m aware that I need to reconfigure some of the other hardware to fit in with a few upcoming projects.

Why am I an optimization tart? Well, the last time I did any of this was at about the start of the year, and I did it when I installed the kit into the new house in September too. Even now, I’m thinking about what happens next, what projects will start in September, and how that might affect the current configuration.
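The post-acceptance filtering mentioned above – accept and deliver first, score and route afterwards – might be sketched as a routing decision like this. The function name and threshold are hypothetical stand-ins, not the actual Perl script:

```python
# Illustrative routing logic for post-acceptance mail filtering:
# by this point the MTA has already accepted the message, so all
# we can do is decide where it lands.

def route_message(spam_score, has_virus, threshold=5.0):
    """Pick a destination folder from scanner results (e.g. SpamAssassin
    score and a ClamAV verdict); 5.0 is a common default spam threshold."""
    if has_virus:
        return "quarantine"
    if spam_score >= threshold:
        return "spam"
    return "inbox"
```

The drawback this illustrates is the one in the post: because acceptance happens before scoring, the filter can only be changed by touching a live delivery path, which is why the reconfiguration can’t be done casually.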
And there’s a list, a few pages long, of improvements, extensions and new features I want to add to the Intranet. Not to mention the fact that about half of the links on that Intranet don’t work properly anyway…
I’m no stranger to grid applications, and on the whole I’m all for making better use of my computer equipment if I can, but I think there’s a slight misconception about how these applications work and operate. In general, most people – especially inside offices – leave their computers on all day and all night, which means they are using electricity round the clock. If each worker uses their computer for 8 hours a day, that’s 16 hours in which the computer is not being utilized to its full extent; ergo, ‘wasted cycles’. The computer is sat there, largely idle, perhaps occasionally checking your inbox or refreshing your web pages.
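The ‘wasted cycles’ model can be sketched as a toy client loop – fetch a work unit, wait until the machine is actually idle, then compute. All of the names here are hypothetical; this is not any real grid client’s API:

```python
import time

def run_when_idle(get_load, fetch_unit, compute, threshold=0.2, poll=1.0):
    """Toy grid-client loop: only burn the 'wasted cycles' when the
    machine's load is below a threshold (i.e. the user isn't working)."""
    while True:
        unit = fetch_unit()
        if unit is None:
            break  # coordinator has no more work for us
        while get_load() > threshold:
            time.sleep(poll)  # user is busy; back off and wait
        compute(unit)
```

The key point the loop makes is that a grid client doesn’t magically use zero resources – it deliberately defers to the interactive user, so the ‘free’ cycles are the idle 16 hours, not the working 8.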
Working from home is seen as utopia by many (certainly I enjoy it), but I’m also aware that providing users with the technology and the capability to do their job effectively is just as difficult. In the bad old days we used modems and dial-up through special phone numbers, using hardware and software on our own servers. Today, with broadband, things are easier: if your company has a network connection and you can get broadband, then connectivity between the two is easy – firewalls and VPNs aside. But what about integration with your environment at the office? I don’t mean cubicles and vending machines, but the idea of having access to your desktop, email, and servers.

Many use notebooks, but these have their own problems, including the fact that they need to be lugged about (which not everybody likes), and that they are high-value items which are easily stolen. Notebooks are also relatively high-maintenance. A good IT department will expect to keep the machines up to date, run regular preventative maintenance, and back up the contents where relevant to ensure that no documents are lost in the event of failure. All of this is complex enough in a desktop environment, but with a notebook that isn’t always left in the office overnight, it becomes a complete nightmare. I know – I’ve been there. Add to this the fact that most users just don’t use it as a notebook – they pick it up from their desk at work, take it to their desk at home, and then reverse the process – and you have a relatively high-cost, maintenance-hungry item that is not actually used in its intended environment; the notebook idea starts to look unattractive.