Development Cycle at Apple and Microsoft

One of the attributes of a good software development process is consistently releasing software. With this in mind, I wanted to compare the development cycles of Microsoft and Apple when it comes to their operating systems. I only compared the client operating systems, not the server versions.

OS X:
10.0 — March 2001
10.1 — Sept 2001
10.2 — Sept 2002
10.3 — Oct 2003
10.4 — April 2005

Windows:
NT 4.0 — July 1996
Windows 2000 — February 2000
XP — Oct 2001
Longhorn — 2006 (estimate)

It is interesting to note that Apple has consistently produced incremental improvements to their operating system with about one release per year. (They say that starting with 10.4 they are going to slow down this pace, I’m guessing they will release a new version every 2 years.)

I am curious about the size of the development teams. I would guess that Apple’s is probably smaller, but I can’t find any actual numbers. Apple doesn’t worry as much about backwards compatibility. If a developer wrote software that takes advantage of a nonstandard API, Apple doesn’t care if it stops working in a later release. Microsoft on the other hand does extensive testing to make sure that older software will work with each new operating system. This may give Apple something of an advantage because they don’t have to support a large set of complicated “bugs” in order to make poorly written software work as expected.

The foundation of OS X is NeXTSTEP, which was designed by some of the best object-oriented programmers back in the 80’s. If Apple does indeed have a smaller development team, I wonder if the base of software they are working with gives them some type of advantage when it comes to consistently delivering new versions. I’m sure both companies are making use of OO design, but if Apple strives for the same type of elegance and simplicity in their code as they do in their hardware, they may have a significant advantage.

When Microsoft comes out with a new release, it usually involves years of coding. The upgrade from Windows 2000 to Windows XP was an incremental change. Most of XP seems to be based on Windows 2000, and the changes seem to be more cosmetic than anything else. I believe they made some significant changes under the hood (especially with regard to graphics), but for most uses XP just has a different (possibly better looking) interface. If you have a computer with Windows 2000, it is hard to justify upgrading unless you have a specific application that requires XP (video editing, for example). Because of this, I think Microsoft may be trying to create new versions of the operating system that are less incremental. If the changes are revolutionary, then more people may want to buy it for the new features.

From the client side of things, there isn’t really any killer application that is making people go out and buy new computers. Video editing is probably one of the few applications that really requires today’s top-end hardware. If Longhorn is going to be significantly slower on today’s hardware than XP or even Windows 2000, what is the incentive to upgrade? Microsoft’s hope is to make an operating system so innovative that people will be compelled to upgrade. They managed this when Windows 2000 came out, but the majority of the upgrades to Windows 2000 weren’t because of stunning new features. People upgraded because it was more stable and easier to use. By easier to use, I’m not referring to the cosmetic changes to the interface. I’m referring to the fact that it was much simpler to do things that we now take for granted, like setting up networking.

Microsoft is going to have a difficult time convincing people to upgrade to Longhorn unless they deliver a product that is much more stable and much easier to use. Right now it looks like they have a lot of cool features (transparent windows, etc.), but it will take a lot of work to turn these features into actual usability improvements.

Apple has been following the incremental improvement path. The differences between 10.2 and 10.3 weren’t earth shattering, but 10.3 runs enough faster and has just enough improvements to make it worth the $129 upgrade price. In general you have two options to speed up a Mac: buy a new computer, or buy the newest operating system and perhaps some additional RAM. There is a lot of incentive to upgrade your OS if you know it will make your computer run faster. I don’t know how long they can keep this up, but I’ve heard reports of people using the beta version of Tiger who won’t go back to 10.3 because of the increase in speed.

In the long run, it is hard to say which strategy will win. But if Apple continues to produce incremental upgrades every 1 to 2 years and Microsoft continues to produce radical changes every 4 to 6 years (with a possible incremental improvement in between), I predict that Apple will continue to erode Microsoft’s market share. With a shorter release schedule, Apple will be more prepared to deal with changes to the market. They will also get feedback more quickly. If the market doesn’t like 10.4, Apple can quickly correct for this in version 10.5. On a 4 to 6 year development cycle this is much more difficult to do.

Virtual Private Linux Servers

There used to be two choices for web hosting. You could get a dedicated server for several hundred dollars each month. This would give you complete control of your machine letting you schedule automatic jobs to run, upgrade packages, etc. Or you could share a server with a bunch of other people. This would keep your expenses low (sometimes under $10 per month or even free), but you were restricted to basically just uploading static pages or PHP.

There is some software out there called User Mode Linux that lets you create virtual machines on one physical box. This means hosting companies can put in one server and share it among several users. The users get complete control (including root access) at low prices. For people who want to host small to medium sites, this is perfect. They still get complete control and shell access, but they don’t have to pay for an entire machine.

  • Easy Co — Currently I’m hosting www.markwshead.com at EasyCo. They have good service and telephone tech support. I pay about $15 per month for their base level package.
  • Redwood Virtual — I host blog.markwshead.com with Redwood. They don’t have telephone support, but their prices are even cheaper. It is only $8.33 per month if you pay for a year upfront. They recently added an interface that allows you to reboot your system if it gets hung, so this makes the telephone support less of an issue. It will also let you reinstall everything back to the original settings which can be nice if something gets terribly messed up.
  • Open Hosting — I just ran across this company the other day. Instead of limiting your virtual machine to the resources you’ve paid for, they will give you a base package and then charge you for the extra usage at the end of the month. If you are like me where your machines sit idle or just serving http 95% of the time, this may be a good way to get a lot more power while still keeping costs down. Currently with Redwood and Easy Co, I’m running into limitations because of the amount of RAM I’m paying for. I can’t run some of the tools I need, but it is hard to justify upgrading to the next level when I only need to run the tools once or twice a month. Right now it is cheaper to do it offline and upload the results. A setup like Open Hosting might work very well because I’d have the extra resources when I needed them.

Why Google Will Buy Amazon

While I don’t anticipate Amazon selling out to Google anytime soon, much of the work done at Google is being duplicated by Amazon and vice versa. Google’s mission is to organize all of the content in the world and make it easy to find. This is basically what Amazon has done for shopping. As both companies expand, they are going to find themselves doing more and more work that is similar, even if their end products are very different.

Here are a few examples of areas where there may be overlap:

  • Restaurant Menus — This seems like something in Google’s domain, but Amazon is the one implementing this.
  • Locate a Taxi — Given that Amazon is doing restaurant menus, this seems like it would be something similar, but Google is doing this one.
  • Search inside books — This seems like a perfect match for Amazon, but both Google and Amazon are providing this service. Google is currently working on scanning in Harvard’s library so the books show up in their search results. They will only let you view a few pages due to copyright issues.
  • Website Traffic Rankings — Amazon is providing this service through Alexa. The data is coming from Google though.
  • Directory of Websites — Amazon and Google both provide this, but they both pull their information from dmoz.org.
  • Access to Scholarly Papers — Google is doing this through Google Scholar. Most of the time it gives you links to websites where you can buy or subscribe to the information. However if you are part of a university that is working with Google, they can pass you right through to the information without needing to go back to your university library logon.

Much of the work being done at both companies is similar. Both Amazon and Google are scanning in books, providing a way to search the book, and presenting the information in a way that protects copyrights. Both companies are trying to provide better ways of categorizing information on the web. Both companies gather information about movies. It seems like only a matter of time before someone realizes that a good portion of the “grunt” work being done at both companies could be done once and used in both places.

Why Java Won’t Get It Right

Why Java Won’t Get It Right is an interesting entry about some of the problems with Java technology. The best part is that it is written by someone who actually knows Java. A part that I particularly liked was:

They over-architect everything. I’ve actually used a Java framework (I’m not gonna say which) that had XML config files that configured more XML config files! That’s just silly.

The author makes comparisons to Ruby on Rails and talks about how he doesn’t think Java will ever have anything like Rails.

I’ve seen a few demos of Rails and it is impressive, but much of the functionality it gives you has been available in WebObjects for some time. In fact I’ve met several Ruby developers that started with Rails and switched to WebObjects as their application got bigger. (Update: It turns out I was mistaken. They switched from Ruby to WebObjects, but they were using a different web framework instead of Rails.)

There is an interesting comparison between a Ruby project and a Java project posted on the Ruby on Rails site. The code comparison is interesting because it shows how much Ruby does for you automatically if you know how to use it. A lot of what Ruby is doing is giving you automatic setters and getters.
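To make the contrast concrete, here is a small sketch in Java of the kind of getter/setter boilerplate that Ruby generates automatically (a single `attr_accessor :name, :age` line in Ruby produces the equivalent of all of this). The `Person` class and its fields are hypothetical examples of my own, not taken from the comparison linked above:

```java
// A plain Java bean: every field needs an explicit getter and setter,
// which is exactly the boilerplate Ruby's attr_accessor eliminates.
class Person {
    private String name;
    private int age;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
}
```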

It would be interesting to see a comparison between the amount of code necessary to write a Ruby application and the same app in WebObjects, but when it comes down to actual productivity the language being used is rarely the bottleneck. The skills of the programmer are by far the most important factor. The tools available in the language are second and the language ranks third or lower.

Good tools have a huge impact on productivity. Simple things like auto-complete and real time syntax checking cumulatively make a large difference in productivity. One of the areas where WebObjects really shines is in giving you the ability to graphically connect your data with the view. You can still do everything manually in code, but the graphical tools give you the ability to really think about the problem on a level that is much closer to the user experience.

Thread.sleep() problem

The following is a JUnit test that looks like it should always pass: mark the current time in a variable called start, call Thread.sleep and tell it to sleep for x milliseconds, note the current time again in a variable called end, and then assert that end - start is greater than or equal to x.


    public void testThreadSleep() throws Exception {
        long start = 0;
        long end = 0;
        // Sleep for 1000ms, 1020ms, ..., 1480ms and assert that at
        // least that much wall-clock time actually elapsed each time.
        for (int i = 1000; i < 1500; i = i + 20) {
            start = System.currentTimeMillis();
            Thread.sleep(i);
            end = System.currentTimeMillis();
            assertTrue(i <= end - start);
        }
    }

However, in actually running this code the assertion is not always true. It appears that when you call Thread.sleep(x) it may not sleep for the entire x milliseconds. Obviously it might take longer than x because a thread isn’t guaranteed to run. There might be another thread with a higher priority, or the system might be doing garbage collection. However, I would expect that it wouldn’t sleep for less than the specified amount of time, but that is what appears to be happening.

I believe this has to do with the way the JVM operates. Evidently it may wake up a thread a few milliseconds before the appointed time. It is possible that it is anticipating garbage collection and waking threads up slightly early.
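If you genuinely need the full duration to elapse, one workaround is to re-sleep for whatever time remains until the deadline. This is a minimal sketch under the assumption that the early wakeups are only a few milliseconds; the `SleepAtLeast` class and method names are my own:

```java
// Workaround sketch: keep sleeping until the requested wall-clock
// duration has actually elapsed, since Thread.sleep() may return
// slightly before the appointed time.
class SleepAtLeast {
    static void sleepAtLeast(long millis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + millis;
        long remaining = millis;
        while (remaining > 0) {
            Thread.sleep(remaining);
            // Recompute how much time is left; loop again if we woke early.
            remaining = deadline - System.currentTimeMillis();
        }
    }
}
```

With this in place, the assertion in the test above should hold, at the cost of occasionally sleeping a few extra milliseconds.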