
Windows vs Linux – Stability and Predictability

Disclaimer: I wrote this in a single, several-hour-long session without doing any editing or research. These are simply opinions that I felt like shouting out to the world. If you disagree, agree, or have suggestions on expanding this, feel free to comment below.


I’ve been thinking about why it is so hard to convince people to move to or even just try Linux, whereas people are much more open to the idea of using OS X. Of course there are the usual suspects, like gaming. But with Steam arriving on Linux, there are almost no games my friends play that aren’t playable on Linux. Or drivers, but that situation has been slowly improving; as long as you pick certified hardware, it isn’t an issue. Then you have marketing creating misconceptions, but quick demos and a recommendation from the guy who fixes all their computers easily dismiss those incorrect notions. And of course there is the “I don’t want to reinstall everything” excuse. However, I only suggest this to people getting new hardware, and I simply suggest they try it out on a Live CD or VM (which I will set up). And yet, it is still very hard to change people’s minds. So what is it that Linux lacks, and what scares people away from even trying it? I think it all comes down to stability, predictability, and support.


Let’s first look at stability. No, not stability in the sense of the computer crashing (BSOD, anyone?) but rather in the sense of environment stability. In all major, popular versions of Windows (95, XP, 7, and perhaps 8 in the future), the desktop environment (I’m including the window manager, file manager, and absolute basics here) has been rock solid. There are very few bugs noticeable to the normal user. Windows itself may crash, but Windows Explorer has been one of the least buggy interfaces I’ve ever used. Compare this with the primary default DEs in major Linux distros – GNOME, KDE, Unity, and Cinnamon. Also look at some other DEs that aren’t scary to Windows users – XFCE, LXDE, Pantheon (Elementary OS’s DE). Out of all of these DEs, how many have been paragons of stability over the past 5, 10, or 15 years?

GNOME 2.x – As much as I loved GNOME 2.x (with or without Compiz), it wasn’t exactly bug free. It was stable by the time I started using it, but I still ran into a lot of bugs that made me think, wtf? While it outpaced Windows Explorer in features, those new features often brought with them an endless stream of annoying bugs.

GNOME 3.x – I like the idea of GNOME Shell but have been adamant since release that it was released too early. 3.0 should have been an Alpha test and they should have only moved this to a Beta after a few releases with extensions in place. Even now, they aren’t anywhere near RC state. For anyone who wants stability, I would steer clear of GNOME Shell. Too many bugs and too much jumping before thinking with how they handled extensions (who here has out of date extensions after upgrading?).

KDE 3.x – I wasn’t around for this so no comment here. However, the fact that it was dumped for a complete rewrite in KDE 4.x will be covered in my support section.

KDE 4.x – I tried this out when it was new and damn was that shit buggy. I’ve tried it on and off for years, and I will say that it is fairly stable now. However, it does have performance issues, and the sane defaults set by most major distros don’t make it stand out from Windows Explorer, because Windows 7 copied features from KDE (and did so without stability issues, to boot). For KDE 4.x to really shine, you need distros to thoroughly customize KDE to vastly outshine Windows 7. This is possible, but I have never seen any distro come out of the box with 1) well designed eye-candy settings that mesh as well as Pantheon’s and 2) an easy to understand walkthrough of how to use the DE features in KDE. I’ve attempted both, but for someone not well versed in KDE, the plethora of settings is just too overwhelming to wade through and create a polished DE.

Unity – I will admit, I like Unity because it is set up similarly to how I use most of my DEs (launcher on the side). However, it has the same problem as GNOME Shell in that it is still too buggy to use or suggest.

Cinnamon – I like the direction this is going, but there is still a TON of polish needed and basic bugs to be ironed out before I would suggest this to someone coming from Windows. Hell, the text cursor in the menu still doesn’t blink and we are already at version 1.6. What is up with that?

XFCE – Version 4.10 finally caught up with Windows 7 (aero-snap), and Thunar 1.6 finally outshines Windows Explorer with tab support. However, as a full-time XFCE user, I will say that this environment is not 100% better than Windows 7. It is about even on features and stability. So telling friends to try out XFCE isn’t going to convince them to change.

LXDE – Same boat as XFCE but fits a slightly different niche. I wouldn’t expect using this to convince anyone to use Linux over Windows.

Pantheon – The most interesting of the new DEs that I have been following since inception. It is still too new though and only recently went to Beta status, so again I wouldn’t suggest this to people who want a stable DE.

Looking at all of these DEs, it is clear that Linux is on the front of innovation. The problem is that too many bugs are introduced that screw with the user experience. If you pick a stable DE, then you end up with a situation where Linux is simply on par with Windows. And finally, the last piece of the puzzle – OS X. This DE takes a middle ground between Windows and Linux – it is slightly more innovative than Windows, much more polished than Linux, and crazy stable like Windows. While it does require a paradigm shift to switch to OS X, users are content knowing that things will work as they expect. This is what will be needed from Linux DEs to convince users to switch away from Windows.


While innovation is great, it is clear that stability is more important. What innovation does get introduced, though, must be predictable. Both Windows and OS X only introduce new features that have been thoroughly tested by focus groups and tweaked to seamlessly integrate with the current DE. Innovation in Linux is like the wild wild west. New stuff is being implemented all the time, but you have no idea where things are headed. In fact, it is often the case that Windows and OS X simply take the most popular new features in Linux, test them into the ground with focus groups, and then implement them in a super stable way months or years down the road. They are basically Debian but much faster. To give an example of what I mean, let’s walk through the DEs again.

GNOME 2.x – Very predictable. Nice! But it is no longer supported which leads to my third issue discussed later.

GNOME 3.x – Very unpredictable. Boo! In fact, this is the least predictable of all DEs I have ever encountered in my life. With each release, I have no idea what will change and what muscle memory I will need to retrain to use this DE.

KDE 3.x – Same comment as above.

KDE 4.x – Very predictable. Nice! But there is no tutorial to teach you what everything in the DE does. Seriously, I’ve looked everywhere for one and the information out there is simply too sparse and contains too few examples for anyone to learn KDE without trying everything out themselves.

Unity – Sort of predictable, but Canonical has shown that they aren’t afraid of making fairly big changes to the GUI (moving the min/max/close buttons, anyone?). Since this DE is still fairly new, I would be afraid to suggest it to anyone for fear that other big changes are in store.

Cinnamon, XFCE, LXDE, Pantheon – All very predictable. Nice! In fact, the reason why they are predictable is because it is easy to be predictable when a DE focuses on simplicity and getting a few things right at a time.

So we have a few candidates here which can be both innovative and predictable. However, they all fail the stability test above which is fairly important. They are also fairly young in most cases and that leads to my final problem.


Let’s first get this out of the way. Microsoft is clearly not the paragon of support. Neither are the Linux communities, with their differing personalities leading the various software projects. I don’t know about Apple so I won’t comment on them. What I mean by support is expected longevity and ease of upgrade.

Windows generally doesn’t upgrade all that often, and when it does, the major upgrades are ones people expect to be painful (e.g., anything dealing with reinstallation). What they do well is that their OSes don’t have many major releases. Instead, you just continually get updates for as long as Windows is supported, which is a VERY long time. Hell, my older brother still uses Windows XP. Combined with how software updates don’t just stop on Windows, you can technically have the most recent version of all your software at all times on supported versions of Windows.

Apple, from what I understand, is an easy to upgrade system. In fact, I believe upgrading major versions is less painful than on Windows 7 and their support in helping you migrate is top notch. They are (fairly close to?) the ideal half-rolling release model.

Then there is Linux. You have a choice of normal release models (Ubuntu, Debian stable, Fedora, etc.) similar to Windows, half-rolling release models (Debian Testing, Chakra Linux, OpenSUSE Tumbleweed, etc.) similar to OS X, and rolling release models (Arch, Gentoo, etc.). For each of these major models, I will break down the issues users have with Linux:

Normal Release Model – Updates are either too frequent (Ubuntu, Fedora) or too slow (Debian stable). Neither of these would be issues except for the fact that 1) software updates often slow to a crawl or stop (Ubuntu PPAs alleviate this, but you are at the mercy of the maintainer) and 2) support even for LTS versions has never matched the longevity of Windows support. As someone who has used an LTS past end-of-life and simply never reinstalled (I get no updates on my netbook anymore), I will say that it sucks because all the repos die. Unlike Windows, it is a pain in the ass to install “new” software (really, old versions of software that are supported on your OS but must be manually compiled because the repos are gone) due to hunting down dependencies and compiling manually. Even if Win 95 isn’t supported anymore, it is still easy to install software so long as you get the installer.

Half-Rolling Release Model – This has the fewest problems for Linux. In fact, I would consider this to be a very good tradeoff if it weren’t for one issue – software updates tend to stop coming along with the core updates. So while Windows and OS X are getting the latest versions of software, Linux ends up lagging behind. Chakra Linux is the only candidate that seems to do this right, BUT they are not a stable distro. Their main issue is that they still haven’t released a GUI updater able to handle major upgrades without user intervention. Everything is still command line and involves editing files (they derive from Arch, so they have the same pain points of manual file editing when something in the core changes). LMDE, meanwhile, has a great updater but is based off of Debian Testing, which is embarrassingly slow at getting software updates.

Rolling Release Model – Nothing like Windows or OS X, and it would blow them out of the water IF it were stable. Always having the latest and greatest is an issue because you don’t know how well a given piece of software is tested. As an Arch user, I’ll give a very recent example I ran into and still cannot solve. IBus was upgraded in the Arch repos to an unstable version because GNOME 3.x did something to make the previous version of IBus incompatible with it. Of course, the unstable version is… well, UNSTABLE. It literally does not work in all but one use case (Japanese Anthy), and even then, that is the only part of the software that works. Every other feature (configuration, appearance, etc.) is broken. Solution? None. You have to find a way to roll back to a stable version while resolving the dependency rollback yourself. All of this is manual, and all of this is painful. One last point: rolling release distros tend not to have any sort of easy-to-use graphical updater due to the nature of the model.

You can see that Linux is tantalizingly close to having a great model for releases, but none of them are perfect. They all fall short in major ways that make both Windows and Apple stand out as competitors. But this isn’t the worst part about Linux support. What really stings is when the support for your DE dies before your distro does. Specifically:

GNOME – Support for 2.x ended, and while it is stable, it isn’t bug free like Windows Explorer or OS X. This includes software like Nautilus. While Debian Wheezy will have GNOME 3.x, Squeeze still has 2.x, and there likely won’t be any way to upgrade it without upgrading the entire OS. MATE is an attempt at keeping 2.x alive, but that isn’t exactly bug free either. You can make endless comparisons about how Windows is worse than even GNOME 2.x, but all users of XP, 7, and eventually 8 know that bug fixes won’t suddenly end before the OS itself is unsupported. This tying together of distro and DE support is something GNOME does not have, and because of that, they should take end user usage into consideration instead of selfishly doing what’s best for the GNOME project. Hence I still believe GNOME 3.x should be an alpha/beta instead of an actual released DE.

KDE – This is where the 3.x to 4.x move bothers me. Full rewrites are never bug free and are often much buggier for a few years after release. The fact that KDE pulled the same thing as GNOME and released a fairly unstable 4.x line without maintaining the 3.x line simultaneously is something that really screwed end users. Again, they have no distro to tie support to (Netrunner and Kubuntu are close, as they are sponsored by Blue Systems, but there is no official KDE distro), so they really should consider end users over their own project’s interests. Trinity did pick up the 3.x line to continue support, but just like MATE, it isn’t exactly a good alternative except in the short term. Users will not want to deal with the hassle of converting to Trinity or MATE until the new line (4.x and 3.x respectively) becomes stable a few years down the road.

But there is hope. Outside of the big 2 (GNOME and KDE), there are other distros maintaining their own DE which means updates forever on their LTS versions. Unity, Cinnamon, and Pantheon all fall under this and for any user using those three, being able to see the DE slowly improve without having to upgrade the base OS is great. However, the keyword here is improve. As mentioned earlier, none of these DEs are stable enough to be suggested as alternatives to Windows or OS X.

Then there are the simple, stable DEs – XFCE, LXDE. Both of these are pretty distro independent and their updates are fully dependent on which release model you follow. But as mentioned earlier, they aren’t good candidates because they don’t offer a clear improvement over Windows or OS X.

So there you have it. Why Windows and OS X beat out Linux even after decades of innovation. I still have hope that the community will eventually get it right (heck, even I’ve been trying to find the right combination of software to make an unbreakable XFCE Arch Linux desktop), but who knows when that will happen. For now, I’ll continue following these distros in hopes that one of them will make that next big step into being THE perfect Linux distro. One with all of the right pros and none of the crushing cons.

  • Linux Mint / Linux Mint Debian Edition w/ Cinnamon DE
  • Ubuntu w/ Unity DE
  • Chakra Linux w/ KDE
  • Elementary OS w/ Pantheon DE

I have been following all of those since they’ve been created and the future looks bright. Let’s just hope it isn’t far away.



Diablo III – Endgame Theories pre-1.04

This game is all about farming, but endgame farming is boring because:

1. Leveling is so easy that you don’t need to save up mid-range gear. You can get to 60 without much difficulty, on gear you pick up while playing. This means there is no need for twinked characters to help new characters level up to equip endgame gear for the final grind. This kills the excitement players get from finding any good low or mid game gear. Hence the AH is all about finding the best of the best, and anything less just sucks (even if it is good).

2. The lack of custom character builds (or rather, the very limited options) causes niche items to simply not exist. This is compounded by how most weapons are interchangeable no matter what skills you use (any 1-handed weapon is fine on a monk). Since no one is making a Dream Sorc or a Ranger Paladin in DIII, you won’t be finding any wacky items that might make people go, “I want to make X character build using this item.” This further reduces the chances of someone finding an item and going, “Not the best, but cool!”

3. The lack of stat points reduces the utility players get out of certain item modifiers. Finding a nice low-req, high str/dex item in DII was much cooler than finding a similar item in DIII. This combines with #1 and #2 in that you don’t need such items to help you super twink a character for faster leveling.

4. Reduction in endgame goals has reduced DIII farming to a single goal: finding the ideal set of equipment for any character. In DII, you can get to the endgame and do many things:

  • farm for cool items
  • try to get to 99 in the ladder race
  • hunt Ubers
  • goldfind for gambling
  • farm for crafting items

In DIII, your equivalent options are:

  • farm for top notch items
  • goldfind for buying on the AH
  • farm to craft items

As mentioned, farming is a lot less exciting because you are getting more “trash” now than in DII, due to the limited utility of many stat modifiers. It should be obvious that goldfinding for gambling is much more exciting than goldfinding to buy on the AH. Lastly, crafting is very different in DIII than in DII because the items you need to craft are a lot less varied. It also overlaps heavily with farming for good items, in that any trash you find can be salvaged.

Then there are the missing endgame goals. No ladder means you don’t need awesome items to clear areas faster for better exp. No PvP means you don’t need to look for specialized PvP gear. What you need items for is beating Inferno (once) and faster farming. Since Inferno can only be so hard, you only need gear that is “good enough”, because the game requires little to no skill to play (bosses and monsters are easy to read). Meanwhile, if you are farming for better farming gear, it ends up being VERY pointless.
Thus you only have two endgame objectives to make the game “fun”: one requires sitting in front of the AH looking for the exact item you want/need, and the other is to farm just enough to beat the game.
Or… you farm for items to sell and make real-world money.

5. The new stat point design, while nice in that it makes things easier for casuals, does not resolve the ultimate problem with DII stats – everyone still pumps vit and occasionally a primary (dex for zons, dex for block, strength for gear). DIII ends up being worse in that characters are so reliant on damage that they must pump their primary stat along with vit. Everything else is useless (def from str is pitiful, res from int is pitiful, dodge from dex is pitiful). This further compounds the “stats rule all” theory I have for items in DIII. Everything else (save for a few useful mods like mf, all res, crit, leech, and attack speed) is useless. Then again, I haven’t seen enough other mods to know what else exists.

Overall, I’d describe the endgame goal of farming as… weird, based on how Blizzard designed the game. There isn’t enough content right now to merit farming other than to make money. I just don’t see the point of farming in the current game.


50% of New Users Stick with Google+

So there has been some talk about Google+ dropping in traffic by some 60% according to Chitika’s statistics. Looking at the graph and comparing it to the Google Trends info about Google+ leads to some fairly obvious and interesting conclusions.

Google+ 2011 Trends

First off, the info about those letters from Google Trends:
[A] Google launches Facebook rival ‘Google+’ – Sydney Morning Herald – Jun 29 2011
[B] Google+ social network membership tops 10 million – The Province – Jul 15 2011
[C] Google+ social network adds games – Ottawa Citizen – Aug 12 2011
[D] Google+ opens to everyone, takes fight to Facebook – Zee News – Sep 21 2011

If we use Paul Allen’s user base estimate (an interesting read), we can see that the second spike’s (point D) downward slope correlates to about a 14 million user base increase. This would be from Sept. 21st (the start of the traffic increase on Chitika’s graph) to Sept. 27th (the end of Chitika’s graph). Considering the estimate that there are about 50 million users after this spike, we can conclude that the user base shot up by 33%. Seeing as how new users will be posting several times more than established users, a 33% increase in users causing a 60% traffic spike seems fairly reasonable. Of course, the result is that traffic will drop back down to normal levels afterwards. The most interesting points to then compare on Chitika’s graph are the average traffic numbers from before the opening of Google+ and the average traffic after the spike settled down. The numbers from the 18th to 19th seem to average out to 50 while the numbers from the 25th to 27th average out to about 60 on the traffic index.

So what do we know? A 33% increase in user base from 36 million to 50 million correlates to a 20% increase from 50 to 60 on the traffic index. So this new group of 14 million users ended up having a lower usage rate of Google+. How low? Well using these numbers, we can estimate that every 1 million users from the initial base results in 1.39 points on the traffic index. Meanwhile, every 1 million new users results in 0.71 points on the traffic index. That either means newer users use Google+ about half as much as established users OR only about half of those 14 million bothered to stay after seeing what Google+ had to offer. I’m leaning towards concluding that Google+ has a 50% retention rate as human behavior isn’t going to be drastically different in terms of site usage over a large sample size.
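The per-million arithmetic above can be sketched in a few lines of Python. The inputs are the article’s own estimates (36M/50M users, 50/60 traffic index points), not measured data:

```python
# Back-of-the-envelope retention estimate from the post's numbers.
base_users_m = 36           # millions of users before open signups
new_users_m = 14            # millions added after the Sept 21 open signups
base_traffic = 50           # Chitika traffic index before the spike
post_traffic = 60           # traffic index after the spike settled

# Traffic index points contributed per million users, old vs. new cohort
per_million_old = base_traffic / base_users_m                   # ~1.39
per_million_new = (post_traffic - base_traffic) / new_users_m   # ~0.71

# If new users behave like established ones, the ratio approximates retention
retention = per_million_new / per_million_old
print(f"implied retention: {retention:.0%}")  # roughly half
```

This is the same 50% figure: each new-cohort user contributes about half the traffic of an established user, which reads either as half the usage rate or half the users sticking around.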

Another bit of interesting info: if you compare the jump from release to their first peak at 10 million users (A to B) against the spike in September prior to opening to the public (no doubt induced by rumors) on the Google Trends chart, you’ll see that the magnitudes of the slopes are about the same. Considering how the first 10 million users came in by invite only and the second batch of 14 million users came from open signups, I’d say Google wasted their buzz value (based on the Search Volume Index). How much did they waste? If we estimate the Search Volume Index gain for each slope to be about 2 points, then it took about 0.2 points to gain every 1 million users during invite-only and about 0.14 points per 1 million users after signups opened. So Google squandered about 25% of their publicity buzz after their initial announcement.
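The buzz-efficiency comparison is the same kind of arithmetic. Note the ~2-point slope estimates are eyeballed from the Trends chart, so the final percentage is rough:

```python
# Search Volume Index points "spent" per million signups, per release phase.
svi_slope = 2.0           # eyeballed SVI gain for each spike on the chart

invite_only_users_m = 10  # millions gained A -> B, invite-only
open_signup_users_m = 14  # millions gained after open signups

cost_invite = svi_slope / invite_only_users_m   # 0.20 SVI pts per million
cost_open = svi_slope / open_signup_users_m     # ~0.14 SVI pts per million

# Fraction of buzz wasted during the invite-only period
wasted = 1 - cost_open / cost_invite  # ~0.29 with these round inputs,
                                      # i.e. roughly a quarter of the buzz
```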

Now let’s take my crappy statistical analysis a step further and say that had Google opened signups from the start (or after 2-3 days of conservative initial testing), they would not have squandered their marketing and would have gotten a 25% larger user base with 100% retention/a higher usage rate (1.39 traffic index points per 1 million users). That means that prior to Sept. 21st, instead of 36 million users at 50 traffic index points, they’d have had 45 million users at about 62 traffic index points. The following week would have simply continued the trend of signups and they would have hit the 50 million mark anyway. However, you can already see that their average site traffic would be much higher and their signup rate could have possibly accelerated after hitting critical mass instead of slowing down and needing the adrenaline boost of open signups.
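As a sanity check on that counterfactual, the projection assumes every user contributes at the established cohort’s rate:

```python
# Counterfactual: open signups from day one, all users at the old cohort's rate.
pts_per_million = 50 / 36        # ~1.39 traffic-index points per million users
counterfactual_users_m = 45      # 36M actual + 25% more (the un-wasted buzz)

projected_index = counterfactual_users_m * pts_per_million
print(f"projected traffic index: {projected_index:.1f}")  # ~62, vs. the actual 50
```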

The major counterpoints made by Facebook that lead me to this conclusion are the introduction of the better Group/Friend List feature, the much clearer (albeit still not perfect) privacy settings, and the more streamlined interface for managing what is shown to whom on your Facebook page. These were all pushed out in response to Google+, and seeing as how only 50% of users after open signups decided to stay, I’d say that Facebook made a very good counterattack. Stunting Google+’s effective retention rate by 50% is enormous and will guarantee that Facebook stays in the lead for quite a while (perhaps long enough to turn Google+ into a niche social network). All of this could have been avoided by Google if they had opened signups immediately. The fallout of their closed beta period is resulting in the same backlash that I predicted: lack of engagement by newer users post open signup and lost momentum (though not a total loss, just less efficient), and now they have to fight based on features. Features that, I might add, are easily copied by Facebook. Their more unique features, like Hangouts, also have to compete with established products like Skype and all manner of chat systems.

If Google+ ends up falling to the wayside, the wasted marketing due to not immediately opening up the service will be the primary contributing factor to its failure. I just hope that doesn’t come to pass as we consumers need competition in this space.
