Firefox 29 – Not the most customizable Firefox ever

So I upgraded to Firefox 29 and was pretty excited after reading their claim that it was the most customizable Firefox ever. Upon opening the browser, I was greeted with a very new look and here are my first thoughts:

  • The bar with the buttons, address bar, etc. got a hell of a lot thicker. So despite saving space in the tabs, they went backwards and ended up eating my precious vertical space.
  • Where the fuck is the refresh button?
  • 10 seconds later. Oh, there it is, in the most hard-to-click spot ever (its size plus being a bit off means you click the address bar dropdown instead).
  • How do I get my navigation controls back to how I want? Home, Forward, Back, and Refresh buttons.
  • 3 minutes later. Ok… maybe I’m blind but I really don’t see that Refresh button in the customization options.
  • 2 minutes later + some Google searching. Ok, I’m not blind. Firefox took it out.


WTF Firefox?

How can you call this the most customizable Firefox ever when you REMOVED the ability to add a refresh button (among other options for navigation)? I can see all the work that went into this new version but right now I’m trying to find a way to downgrade because of this usability fuck up.

I should have seen this coming after the whole “let’s remove features because no one uses them (except for all of Africa)” discussion. Firefox is starting to get the GNOME Shell bug where the designers are beginning to ignore users in an effort to push their own vision of simplicity over functionality. Do you know what I say to these designers? You suck at your job.

Rather than try to find a way to better organize and group features, you remove anything you cannot deal with. You implement things in a way that works for a single workflow instead of keeping things flexible to accommodate all users. That makes you all hacks. Unable to think outside the box, you decide to shove everyone into your box.

Have you designers ever thought about how this would affect users with poor eyesight? Or those with disabilities who may not have 100% accuracy when moving their mouse? Or how about people who don’t use a standard DE and place their menu bar vertically on the left/right (hint: moving all of the browser controls to the left/right is a huge usability boon for these people)?

No you didn’t think of these people.

And if you did, then that’s worse because you consciously ignored them.

With Firefox going down the same route as Chrome, who do we non-90% users turn to? Chrome is just as bad. Opera is effectively dead. IE was never good. Safari isn’t exactly good on non-OS X systems. All of the various Linux browsers? Nope, not powerful enough. This trend of companies trying to push everyone into their own box is very disturbing. It’s hard to make sense of and leaves me wondering what I can even do.


Right now, I guess the only thing I can do is stand on my box and shout.


Can you trust your own government?

So I’m sure you have all heard of PRISM, Edward Snowden, Glenn Greenwald, and Lavabit. I’m also sure you have heard of Wikileaks, Bradley Manning, and Julian Assange. You might have heard of Aaron Swartz. You’ve definitely heard of PIPA and SOPA. And you know what, this list is endlessly long. But there are two pieces of recent news that I’d like to talk about.

First is the David Miranda incident and its aftermath. Ever since the internet exploded, we have slowly but surely outgrown the government’s control of mass media. Propaganda is harder to spread since there are many on the internet willing to take time and debunk lies and myths that the government attempts to spread. Unlike printed media, you cannot control all of the voices in the blogosphere. Everyone can create their own soapbox to stand on and speak. Because of this trend, the government has to resort to greater intimidation tactics to try and influence those who are speaking out. These tactics are both highly unethical and borderline illegal. Laws are twisted and sometimes flat out ignored by the government. And you know what? This is the future. This is where we are all headed. A world where a Chinese police state is the norm except the public has no idea that it’s happening.

Do you think your government is different? That your politicians aren’t corrupt? Let me tell you this, America isn’t the only bad apple in the basket. Every government is capable of violating your privacy because, as all technologists have known for a long time, not all protocols are perfectly secure. And even if they were, all it takes is the government exerting secretive powers to strong-arm companies and individuals into doing their bidding. And when they choose to protect their users instead of betraying them? Their companies and lives are destroyed a la Lavabit. We live in a world where hard-working, honest people can be threatened by governments at will without repercussions. A world where people like David Miranda can be detained for bogus reasons just to intimidate them into being quiet. A world where being the Bolivian president doesn’t mean jack shit.

And you know what, good people will continue to fight the good fight. But how long can that keep continuing? Especially when personal possessions can be confiscated and destroyed without compensation. Without reason. Without any legal standing. And all without any recourse for the individual. Or when technology can be compromised or simply forced to shut down a la Lavabit or Silent Circle. Apparently not long, which brings me to the second bit of news I want to highlight. Groklaw is shutting down.

Email is a vital part of communication in the modern world, as are SMS (texting), VoIP, XMPP, and various other protocols. But just like phone calls and mail before them, all governments are capable of intercepting these messages. We never had privacy, which is why anonymous sources and secretive meetings were some of the few ways confidential information was passed along. But in our technological age, where confidential information can come from all over the world, we are reaching the point where nothing may be confidential anymore. If the government has years of history on everyone as well as access to all of their private, personal data, then we are at a dead end. Encryption? That won’t last. Secure channels? They don’t exist when the companies running the hubs have been compromised. The only option is to risk your sources and fight, or shut down in the biggest way possible so as to make a splash that will hopefully ripple into a wave of political activists.

Groklaw is taking the second route. For that I am thankful but at the same time, I mourn the loss of perhaps the greatest objective news source in the intellectual property fight. Without them, it is not a stretch to say that Android would not exist. That the current state of the internet would not be possible. Or that we would have a choice for computing software/hardware outside of Microsoft and perhaps even Apple. I sincerely hope their death will not be in vain.

So I guess it’s time to ask, can you trust your government? I don’t for the obvious reason that I live in America and we have had the blessing of the PRISM leaks. Snowden gave us a fighting chance to force the government to reveal all of its corruption. Everyone else? History and recent evidence says no. So how about we all come together and think of a way to take back control of our lives and our governments?


One Microsoft, One Life Left

Yo Microsoft, let’s sit down and talk.

You have a “new strategy” (which really reads like a vision so that’s what I’m calling it) that states:

Going forward, our strategy will focus on creating a family of devices and services for individuals and businesses that empower people around the globe at home, at work and on the go, for the activities they value most.

Compare this with the old vision of “put a PC in every home” (paraphrasing): which seems more concise? Which can you read and say, “Now that is one heck of a vision. How do we get there?” As much as I like some of your developer products and a few other things, your new “strategy” is representative of what has slowly driven me away from using Microsoft products. I just don’t get you as a company.

Where exactly do you want to go? Your vision is grand but has no clear goal. How do you measure progress? Or rather, a better way to say what I mean is, how do you know that you are headed in the right direction? As a consumer, I am worried when I buy and use your products. Not because they are headed in a new direction, rather it is because I don’t think your vision will result in me using the best possible product.

Compounding the issue is that your vision is big but has no “WOW” factor. It’s not something that makes people think, “Damn, would that be cool if it happened.” Why? Because you are simply following industry trends instead of paving a new path forward.

And there is the crux of Microsoft’s struggles. You are no longer a pioneer, so you’ll always be one step behind everyone else. In a world where interoperability is inevitable, with various open protocols and where even the closed ones at least have APIs, your vision is not something that surprises me. Hell, it’s something I already expected from you.

So please Microsoft, I beg you to reach deep down and pull out a vision truly befitting of your vast resources.

Or, you know, ask Bill for some advice. His post-Microsoft philanthropic goals have the same “WOW” factor as his original vision at Microsoft. It’s clear he is a natural visionary and that is what Microsoft needs. And yes, this is a jab at the Microsoft leaders. I don’t think any of them are tech visionaries. Instead, I believe they are all “business visionaries” (if that even exists) whose only talents are growing a company. Not helping a company innovate. And I feel this will lead to the death of the Microsoft we all once knew.


A Farewell to All Your History

Written in response to the final episode of All Your History:

End of a great YouTube series on gaming history. And now to share my own thoughts about the show.

1. I know the various people who were a part of the show still have ties to Machinima, but to say that the number of people who disliked the change in the show’s direction is a minority is a HUGE understatement. This is the only time I’ve ever seen an unpopular change in a YouTube series/channel result in:

  • repeated downvotes (almost outnumbering the upvotes on a weekly basis which is unheard of),
  • endless negative comments (deserved or not and oftentimes removed by Machinima even when comments were informative about the change in the show’s style),
  • and a large enough drop in viewership (subscriber count + the effect of poorly liked shows on YouTube, which will affect the parent channel) that a series was actually cancelled.

So let me make this clear. I have no negative feelings towards any of the show creators (even Chris Lockey), but saying the dislikes were a minority is dishonest. Sure, it may be a technical minority (< 50%), but based on the like counts, comment ratio, and estimated drop in subscribers, it was clearly > 40%, and that group of fans deserves to be counted as an important part of the viewership. Hell, for all you know, the other 60% may be a 30/30 split between positive and neutral, making the “minority” an actual majority. So don’t play the PR game and call those viewers a minority. Minorities do not get shows cancelled.

2. As a representative of the quiet masses who were willing to let Season 5 cook for a while before returning, I gotta say that yes, we noticed the change in Season 4. The main reason many didn’t speak up about the slight change in style was that we saw it as a positive change and it did not deviate from the spirit of the show. As the Brian & Brian combo themselves said, they mostly kept to the same style even though they made a few changes. It is human nature to stay quiet and not speak up when changes aren’t bad.

Season 5, on the other hand, did not take a gradual approach. It made a huge change in style from the start which was bad for several reasons. Many of them are actually terrible business decisions so I’m not sure how those changes were approved.

  1. An abrupt change with very little lead-in will always generate harsh feedback. Unless you’ve tested your changes against focus groups to make sure they are actually positive in the long run, it is a terrible business idea to make such large changes at once. Examples? See any fucking big change to a website, TV show, comic, etc. EVER. You use Facebook, right? Remember all those times they made huge changes to their UI, privacy settings, etc., and the negative feedback that caused? Yeah, that is what you should expect. The difference between All Your History Season 5 and Facebook? Facebook does a shitload of testing and even does a small beta rollout to random users over time to fix things before going all out.
  2. To cut staff down to one person and expect them to maintain quality is simply a poor business decision. It reeks of cost cutting and greed. The quality of every aspect of the show is guaranteed to go down by doing this so I’m not sure how Machinima expected the show to keep subscribers going into Season 5.
  3. If you cut staff, it is generally a safe business decision to stick to the tried and true until changes stabilize. So telling Lockey to go with his style from the start is, again, a terrible business decision. Example? I’ve been at 2 companies that downsized. During those periods, any huge features involving major changes to our product were delayed just so we could make sure we maintained the same quality in our current product. It’s just safe business practice.
  4. The changes to the show were across the board – intro, narrator, writing, show content/focus, and production style. If you are going to change everything, you might as well change the title too because that is all that is left of the spirit of the show. This is essentially a new series at this point so using the name “All Your History” is a pretty goddamn poor business decision. You are telling your fanbase to expect one thing while providing a completely different product. NOT SMART BUSINESS!

Seriously, I 100% bet that if Machinima allowed Brian & Brian to continue the show using whatever hypothetical changes they wanted to make, the change in narrator to Chris Lockey would not have been all that negative. In fact, since those two proved they understand the audience, I bet their changes would have worked out in the long run and have had a better chance at increasing viewership (clearly the goal of Machinima). So I’m very curious to hear from Machinima why those two were not allowed to execute their vision for Season 5.

3. I can tell that some of the people previously involved with making All Your History did not agree with the changes either. Of course, their responses are going to be politically safe and not bash the show outright but your feelings show in your responses when taken as a whole. It’s not my place to tell you how to answer questions, but I will say this, “Thank you for putting your heart and soul into the show.”

Bet you didn’t expect that, right? Seriously, thank you guys for making the show. Anyone asking you guys to come back is being selfish. Hell, Chris Lockey probably isn’t that bad a guy, even if I most likely fall on the “dislike” side of his polarizing personality. However, it’s not his fault. If there is anyone to blame, I put the blame solely on Machinima for making poor business decisions that anyone with half a brain could tell would lead to the death of the show. So fuck you Machinima!

4. As a response to many of the YouTube commenters who are angry at seeing the show end – guess what? Ending the show was the goal of the negative fanbase. After the first few episodes, the responses elicited from Lockey and Talbert, and the realization that the changes were here to stay, negative fans decided that taking the dog out back and shooting it was the most humane thing to do. And that is why the dislikes and negative comments kept rolling in. No one is sorry that the show was cancelled. That was the goal. If you are going to be angry at anyone, be angry at Machinima for all the reasons I stated above. The show was primed for failure and that is what happened.

5. Lastly, I’d like to point out a key “draw in” factor about the show that the creators, crew, YouTube commenters, and really everyone missed. I haven’t seen a single person post this yet. As much as the style, informative nature of the show, narration, and quality were important to the show’s success, there is one other factor that drew in AND kept the current audience. Nostalgia. This is a show about gaming history. One of the things viewers of our type enjoy is reliving games from our past, learning things we didn’t know about them, and hearing about what the legacy of those games ended up being. Current games are simply not going to be as interesting to watch in a documentary as games from the past because nostalgia hasn’t built up and you aren’t going to be able to draw any good conclusions about the legacy of those games.

The fact that no one has brought this up in the final two episodes is a bit disappointing. It makes me think that everyone avoided pointing this out because it would show how big a fuck up Season 5 was because the shift in style ignored this very important “draw in” factor about the series. It is why the style from Season 1-4 was so successful. Keeping an old school feel was essential to providing the type of atmosphere that made viewers feel like they were reliving their past. I feel like this change was something one or two people were itching to say but didn’t for PR reasons (and I’m ok with that). But I feel like this really needs to be said and should be something for the clueless business people at Machinima to think about.

For my final farewell to All Your History, I’ll say one last thing:

“Thank you for being the most informative gaming history show I’ve ever watched. It is rare for a show to be both entertaining and enlightening but you managed to stick to fulfilling those two principles for 4 full seasons. That is amazing and this series will be sorely missed.”



Experience Gaming on Ubuntu – Part 1

I have a Windows 7 / Linux dual boot desktop for gaming, general web browsing, and media playback. I built it about a year ago and during that time never booted into the Linux partition (running Mint Debian). Why? Too annoying to get gaming to work on it (though Desura ran fine if I recall). Instead, I always booted into Windows and played games there. That was perfectly fine except the computer would randomly black screen and crash due to some video card driver – kernel conflict. I spent about 2 weeks trying various fixes proposed by Microsoft and none worked (even RMA’d some parts hoping that would do it but no luck). Despite this, I still used Windows cause that’s where all my games were.

In comes Steam on Linux. With this release, I carved out a weekend to change my Linux partition over to Ubuntu 12.10 (I wanted LTS but after much research and testing, it turns out my video card requires a newer kernel in order to get sound working over HDMI). After building my system from the ground up using the minimal CD to ensure no bloat, I installed Steam & Desura and gave the system a trial run. I’ll get into my gaming experience in another article, but first, my experience with Ubuntu in general.

As a disclaimer, I am clearly not a Linux newbie. I’ve been using it as my primary work system for almost 5 years now. All of my computers are either dual boot or pure Linux. I even have a massive BSD server running ZFSGuru, and my laptop (which has Nvidia Optimus) runs Arch Linux. So yeah, if I have any trouble or pain points with a distro, I can guarantee that any “normal” end user has zero chance of solving the problem. And oh boy, did I have some pain points.

Linux + Graphics Cards… Bring on the pain!

For reference, my configuration is:

  • Intel Pentium G630 SandyBridge CPU
  • ASRock H77M-ITX Motherboard
  • HIS Radeon HD7750 (Silent version with no fan)
  • A-DATA XPG Gaming series DDR3-1600 RAM AX3U1600GC4G9-2G (4GB×2)
  • SilverStone SST-SG05B-B USB3.0 Case with 300W PSU
  • Hitachi HTS725050A9A364 500GB 2.5″ HDD
  • XBox 360 Wireless Receiver + 4 wireless controllers for gaming

It’s built to be portable, consume very little power, be able to handle games at decent settings (which it does), and be absolutely silent (which it is).

So what pain points were there in getting Ubuntu running? The GPU driver. Everything else worked out of the box and didn’t require me to open a terminal. Great start right there. But when it came to drivers, I went through hell and back to get everything working perfectly. Here is a list of all the hurdles I had to go through to get this working:

  1. Uninstall Linux Mint 14 and go to Ubuntu (with Unity) because it turns out that there is an open bug with the Cinnamon Desktop + Steam that causes most games to just display a black screen when in full screen mode. You can check out the bug progress here:
  2. Uninstall the drivers from the repositories and install the 13.1 drivers manually. The ones in the repo are too old and give you a “Graphics Card Unsupported” message on the screen when combined with my GPU.
  3. Disable the automatic underscan via the command line. The Catalyst Control Center does NOT save settings and apply them after reboot. So to get the stupid underscan to go away, you need to run:
    sudo aticonfig --set-pcs-val=MCIL,DigitalHDTVDefaultUnderscan,5
    The 5 will set the under/overscan to 0%. Yes, there is no documentation on what the number represents anywhere and I had to test it out to see what it did.
  4. Reinstall Ubuntu (yeah, FUCK!) from 12.04 to 12.10. Out the window goes my plan to use an LTS release. This is required because you need a newer kernel than what is provided on 12.04 in order to get sound via HDMI to be recognized by the OS.
  5. Set the sound to HDMI by default and… wait, that doesn’t work. Turns out you need to run alsamixer to unmute the channel, set the card, and then use speaker-test to get the sound to start properly working. Yes, it doesn’t work until AFTER you run:
    speaker-test -c 2
    I figured this one out by tons of testing and putting together clues from all of the failed attempts by other people to get sound over HDMI working with the fglrx drivers (yes, I believe I am the first to get this working). Oh, and no this has nothing to do with me installing via a minimal CD. I tried this out on the default installs for Ubuntu 12.04, 12.10, and Linux Mint 14 as a sanity test before figuring all this out.

So yeah, FUCK AMD DRIVERS! As much as I try to like and support ATI/AMD, their Linux support has gone down the shithole ever since my first (perfect) experience with them 5 or so years ago.

Cool, the GPU drivers and sound are all working now. Amazingly, I had no other hardware issues. I didn’t have to cast insane commands via the terminal to get networking running, or do any custom modifying of conf files to get suspend working (no hibernate since I don’t use swap). Everything else just worked out of the box. Even the wireless XBox 360 controllers / receiver (minus a few noted bugs with the LED lights mentioned on the Ubuntu Wiki).

Next comes the software. What I did have trouble with was configuring Unity to not do annoying stuff like display results from the Software Center (remember – minimal install, so no Amazon already, but damn, they baked the Software Center stuff right into the core Unity libraries). Getting things like the notification area to appear, getting certain indicators working, and dealing with lens crashes (still don’t have the Photos Lens working yet) were also a bit of a pain. None of this is stuff you want newbies to deal with. But at least it isn’t impossible to solve with the help of Google. As for my experience using Unity, I think it is a good DE but still needs a lot of polish, especially with regards to application discovery. But overall, the experience is good since I am quite used to the layout (I have Windows 7 set up the same exact way on all my Windows partitions).

And then the moment of truth, installing Steam, Desura, Emulators, Wine, and PlayOnLinux to run games. I’ll get to each of these in my followup posts.


2013, PlayStation 4, and the mark of the beginning (or end) of the console era.

Like most people, I’m following the PlayStation 2013 Event (albeit, filtered via Reddit updates) for news about what Sony has in store for us in 2013. And it’s interesting.

The rumored (leaked?) prices for the PlayStation 4 are $429 and $529. If true, that means the next gen consoles have finally entered into PC level pricing. Combine this with the move to x86, which will make it far easier for gamedevs of all levels (indie, AAA studios, etc.) to create cross-platform console/PC games, and potential up and coming consoles will have a much greater range of strategies to use to compete with PS4 and probably whatever Microsoft decides to come out with next for the Xbox. Specifically, the two big players to watch are the OUYA and SteamBox.

The OUYA is already clearly going to undercut the PS4 as it targets the market for low end and indie games. It is far cheaper as a console and much more portable, making it easy to use as a LAN system. It will be interesting to see if the ability to play locally with friends can beat out playing over the net (the clear move Sony is making with the share button on their controller). In addition, gamedevs now have an interesting targeting choice for their primary platform – develop cheaply on the OUYA or put in cash to develop on Sony (which should be easier with the move to x86, but there is still a publishing cost associated with Sony). Indie devs will probably target the former while AAA studios stick with the latter, mainly due to the impossibility of running AAA titles on an OUYA given the startup costs. However, it is fully possible that there will be a shift for the AAA studios over the next few console generations depending on the performance of the SteamBox.

The SteamBox is really just a PC under the hood. Early prototypes have already shown that they are targeting a small, portable form factor which will compete head-on with the OUYA in terms of group play. However, they also have the Steam network built in, which will allow them to compete with, and in my opinion, trump Sony’s services. This leaves Sony to compete with the SteamBox on price and hardware platform to garner the attention of both gamedevs and consumers. Unlike the OUYA, which uses an ARM CPU, the SteamBox is most certainly going to use an x86 compatible CPU like the PS4. This will make it much easier for gamedevs to create cross-platform games on both the SteamBox (PC) and the PS4. Because of this, there is a very good chance that both indies and AAA studios will target both platforms by keeping the majority of their code base portable and using Greenlight as a measuring stick to see if polishing the game (porting any non-compatible code, testing, and control tweaks) for the PC is worth the cost. This puts the PS4 in the position of competing on hardware; a losing position in my opinion, as the SteamBox now has plenty of pricing breathing room to outspec (or undercut) the PS4.

With Sony’s admitted failure of the Cell on the PS3, they realized that gamedevs have far more power over the success of their platform and that their name will not hold strong forever. So their move to x86 is both an indicator that they will concede power to gamedevs, and in a bigger move, Sony will be banking their future on their hardware platform and services (really, more the services than hardware). This isn’t a bad move as the very calculated risk can have great long term rewards for the company if pulled off well. By using their current market position, they can maintain a healthy lead in premier titles over unproven consoles (seen by having launch titles such as Destiny, Final Fantasy, and even Diablo III!) while at the same time repairing or building up good relationships with AAA studios. If they can use their market position to buy time to fix all the issues with their gaming & social services before their next gen console is released (most certainly going to completely close the console-PC gap for specs per price), then they can close the gap or overtake the Xbox (superior multiplayer) and Steam (superior social) as the gaming platform of choice. A very profitable market as everyone is moving to the online purchasing model (goodbye retailers, aka: GameStop).

On the flip side, if the SteamBox (or OUYA – unlikely) can draw indies and AAA studios to their platform, then this may mark the beginning of the era where consumers begin moving towards the PC as their gaming platform of choice.

Of course, this is all speculation, very much based on how the big AAA companies decide to move in terms of target platforms. That in turn depends on how consumers see the PS4 and either balk at or swallow the price tag. Very circular, but this definitely puts the standalone console market into interesting times, as the way the next few years play out will be discussed and analyzed by the industry as either a major turning point for consoles or an explanation of why PCs failed.

Why PCs failed? Wait what? Yes, this article doesn’t end here. There are a few other major players (aside from Xbox – the main competitor to Sony and potential industry changer) that can change the entire gaming ecosystem. They are iOS and Android (and maybe Ubuntu; we’ll see how they do. Random aside: I’ll write about Ubuntu in the future as I already switched my gaming desktop away from Win7 to test how well Ubuntu holds up now with Steam.). Yes, I’m talking about the mobile market (phones, tablets). They have already made a clear impact in the handheld gaming market previously dominated by Nintendo and only partially dented by Sony. Especially since Sony is again copying Nintendo and merging the PS Vita play experience with their console, the PS4. Mobile devices are becoming increasingly popular as casual gaming devices and if they can garner enough power, they may become a better platform for AAA gaming in the future due to their “play anywhere” capabilities (this will require innovation in gameplay but I’m certain this will happen). If this happens before the SteamBox can take flight (or after the SteamBox gets crushed by consoles), then PC gaming has a very good chance of straight up dying for the mainstream. Yes, there will always be a lot of PC gamers but the market will forever remain a niche as future games will increasingly move to other devices. Imagine if you can have great MMO gameplay on a mobile device that isn’t a clunky laptop. The social capabilities are endless.

So this may also mark the beginning of a shift in competing platforms from consoles vs PCs to consoles vs mobile (phones, tablets, and I forgot to mention, the web).

Speaking of casual gaming and mobile platforms, Nintendo is (and has been for a while) in a crazy interesting position. They clearly target a different kind of consumer – casual gamers of all ages. Their core market already doesn’t see a standard PC as an alternative because of the nature of their games (Wii still beats the Kinect and PS Move) and how family friendly their franchises are (Mario, Kirby, etc.). Mobile is their only threat, which is why the Wii U was created (a clear counter to the growing mobile platform by making it possible to play mobile-style games on a Nintendo console but with more gameplay design choices). The only thing holding them back from creating a stranglehold on that section of the market (and potentially crippling mobile platforms from competing with consoles & PCs) is how hard it is for gamedevs to create games for Nintendo. It will be interesting to see what moves Nintendo makes after seeing Sony move to x86 and give up so much power to gamedevs.

Google Attempting to Unify Google+ and YouTube Accounts

Interesting. Google is now pushing for having people use the same name on their Google+ and on YouTube (and probably across all Google services). This would mean having your real name on YouTube. Yeah… how about no?

To expand on my decision:

1. There are simply some services where you don’t want to display your real name. YouTube is one of them because you can potentially have an account for posting things like streamcasts, where you are well known by your online handle and not your real name.

2. I already get annoyed by how the default display for YouTube is not Uploads Only. I can only imagine what would happen once Google starts integrating all their services and I suddenly get YouTube videos posted by friends showing up on my YouTube page.

3. Or vice versa, on my Google+ page. I complained almost instantly about the lack of grouping/filtering for your stream in Google+, which got fixed like a year later (way too late; you failed, Google – this is one reason why everyone just stuck with Facebook). If they do start merging services and updates, let’s just say that I don’t think they will put in any worthwhile fixes for their initial overreaching/broken implementation anytime soon. So it’s better to just tell Google to fuck off straight from the start.

4. Laws haven’t caught up with privacy in job hiring yet. So I personally avoid linking my real name to my online presence as much as I can. You never know when an employer will Google your real name in search of what you do online. Sure, I can secure my public info like I painstakingly do with Facebook, but really, why go through the trouble? If I just stick with an online handle, I don’t have to think about this problem at all. There are just no benefits I can see to linking the two unless you are trying to streamline your online presence.

Which takes me to why I believe Google is now making the push here. It was pretty damn obvious we’d get to this point; it all depended on Google+’s progress. Now that they have a fairly good group of celebrities, companies, businesses, and even the government using Google+, they want to provide more “benefit” to these groups by unifying their online presence. This will give all of their various Google service pages more visibility and make them easier to manage… for them. For the rest of us average joes, what benefit does this provide, Google?

Desktop Environment Rewrite Failures – The Corner Cases

DE rewrites fail for many obvious reasons – bugs, new workflows, new paradigms, etc. But one of the less obvious failures is that a mature DE has had time to implement features and fixes for various corner cases. Things that affect maybe 1% of your userbase. When DEs go through a rewrite or complete redesign, a lot of those features are tossed aside and labeled as something to be worked on later. GNOME 3 has so far failed at addressing this. Instead, they are relying on their Extensions framework as a way to try and rectify corner cases. This is poor, no actually, completely irresponsible design on their part. Here is why.

Corner cases are by definition problems that affect very few users. The chances of a user within that small percentage being able to design, code, and maintain a solution for a particular use case are so infinitesimally small that this isn’t a proper solution. Then you add in the fact that GNOME 3 hasn’t found a way to not break extensions at each update, and you get 1% of users who cannot function using your DE.

Ok, so 1% is small. No biggie. This is a stupid way of thinking. You have to remember that there are various, sometimes overlapping, groups of 1% corner cases that cannot use your DE. All of these tiny groups add up to a very large percentage of inconvenienced users. Hence why GNOME Shell gets so much flak. You piss off every vocal minority group and offer no help to resolve their issues. They even go so far as to claim some are invalid (the shutdown option debacle). These situations can be handled much better (Cinnamon is doing a great job of acknowledging concerns, and Unity is at least better at this than GNOME, though not perfect), and until they are, I think GNOME Shell will eventually cause so much fragmentation in Linux DEs that Linux will never be able to catch up to Microsoft or even Apple in popularity. With Steam on Linux coming out soon, we have an opportunity to gain ground in the desktop space, but with so many flat out idiots running crucial projects in Linux, I’m afraid we will blow our chance to maximize this opportunity.

Here’s to hoping one of the following groups pulls off a miracle during this crucial 1 year time frame – Canonical with Unity, Linux Mint with Cinnamon, anyone with KDE, and elementary with Pantheon. These are the people I’m pinning my hopes on (and I say this as an XFCE/Compiz user). I hope one of them manages to catch up to GNOME 2’s feature set and overall polish/simplicity this year. If not, Linux will forever remain a non-factor in desktop computing.

I am not Aaron Swartz, but are you?

I’ve had a long enough time to process my thoughts about Aaron Swartz’s death. If you don’t know who he is, I urge you to read his Wikipedia page, check the news, read the transcript from Aaron’s memorial service, filter out the hyperbole, and focus on the facts to form your own conclusions about what kind of person he was and the situations he faced. Other, greater people than I have expressed their thoughts on the matter more clearly than I could, so I will not express my thoughts on justice here. Instead, I will focus on Aaron’s death and what it means for me.

As usual, I was reading Hacker News today and came across the latest article from yet another famous person about Aaron’s death. I don’t know why I keep reading these articles. I guess I am expecting to find some sort of life changing inspiration or to trigger some sort of epiphany that will lead me to the forefront of the activism movement. But really, I’m not that person. It’s not who I want to be. I will be a supporter, someone who speaks out and gives donations, but even with my own technical skills, this is not a direction I can see myself taking. I am far too selfish and desire wildly different things in life. Does this make me a bad person? I hope not, but I cannot shake the feeling that I should be doing more.


Aaron’s death has transcended him from activist to martyr. So say the many people who are voicing their opinions and pushing changes like Aaron’s Law. Here is this person who attempted to change the world, and it is only after his death that we, as a society, have begun to move towards enacting his visions. Except we aren’t. The initial reaction is to prevent another tragedy like this from happening. Sure this is progress, making technical activists feel safer, but all we are doing is putting up signs saying, “If you want to step up to the plate, you can now do so knowing that 1 of the 3 umpires is now fairer.” We should be taking this opportunity to step up to the plate and swing. Not get into a pissing match with an umpire. We have the crowd on our side and if anything, this is the time to make drastic moves like Aaron did and force changes that will better the world. Screw how crooked the umpires are, ignore the other team, just step up and swing damnit! Swing!!

I guess that is why I feel so antsy. I think this is why I feel like I should be abandoning my life, my dreams, to pick up Aaron’s work. But… that’s not who I am. I’m just part of the crowd and the best I can do is cheer. So please, someone, anyone, step up and give me something to cheer for.

Windows vs Linux – Stability and Predictability

Disclaimer: I wrote this in a single, several hour long session without doing any editing or research. These are simply opinions that I felt like shouting out to the world. If you disagree, agree, or have suggestions on expanding this, feel free to comment below.


I’ve been thinking about why it is so hard to convince people to move to or even just try Linux, whereas people are much more open to the idea of using OS X. Of course there are the usual suspects like gaming. But with Steam arriving on Linux, there are almost no games that my friends play that aren’t playable on Linux. Or drivers, but that has been a slowly improving situation; so long as you pick certified hardware, this isn’t an issue. Then you have marketing creating misconceptions, but quick demos and a recommendation from the guy who fixes all their computers easily dismiss those incorrect notions. And of course there is the “I don’t want to reinstall everything” excuse. However, I’m only suggesting this to people getting new hardware, and simply suggesting they try it out on a Live CD or VM (which I will set up). And yet, it is still very hard to change people’s minds. So what is it that Linux lacks, or what scares people away from even trying Linux? I think it all comes down to stability, predictability, and support.


Let’s first look at stability. No, not stability in the sense of the computer crashing (BSOD anyone?) but rather in the sense of environment stability. In all major, popular versions of Windows (95, XP, 7, and perhaps 8 in the future), the desktop environment (I’m including the window manager, file manager, and absolute basics here) has been rock solid stable. There are very few bugs noticeable to the normal user. Windows itself may crash, but Windows Explorer has been one of the least buggy interfaces I’ve ever used. Compare this with the primary default DEs in major Linux distros – GNOME, KDE, Unity, and Cinnamon. Also look at some other DEs that aren’t scary to Windows users – XFCE, LXDE, Pantheon (Elementary OS’s DE). Out of all of these, how many have been paragons of stability over the past 5, 10, 15 years?

GNOME 2.x – As much as I loved GNOME 2.x (with or without Compiz), it wasn’t exactly bug free. It was stable by the time I started using it, but I still ran into a lot of bugs that made me think, wtf? While it outpaced Windows Explorer in features, those new features often brought with them an endless amount of annoying bugs.

GNOME 3.x – I like the idea of GNOME Shell but have been adamant since release that it was released too early. 3.0 should have been an Alpha test and they should have only moved this to a Beta after a few releases with extensions in place. Even now, they aren’t anywhere near RC state. For anyone who wants stability, I would steer clear of GNOME Shell. Too many bugs and too much jumping before thinking with how they handled extensions (who here has out of date extensions after upgrading?).

KDE 3.x – I wasn’t around for this so no comment here. However, the fact that it was dumped for a complete rewrite in KDE 4.x will be covered in my support section.

KDE 4.x – I tried this out when it was new and damn was that shit buggy. I’ve tried it on and off for years and I will say that it is fairly stable now. However, it does have performance issues, and the sane defaults set by most major distros don’t make it stand out from Windows Explorer because Windows 7 copied features from KDE (and did it without stability issues to boot). For KDE 4.x to really shine, you need distros to thoroughly customize KDE to vastly outshine Windows 7. This is possible, but I have never seen any distro come out of the box with 1) well designed eye-candy settings that mesh super well like in Pantheon and 2) an easy to understand walkthrough of how to use the DE features in KDE. I’ve attempted doing both, but for someone not well versed in KDE, the plethora of settings is just too overwhelming to wade through and create a polished DE.

Unity – I will admit, I like Unity because it is setup similar to how I use most of my DEs (launcher on side). However, this has the same problems as GNOME Shell in that it is still too buggy to use or suggest.

Cinnamon – I like the direction this is going, but there is still a TON of polish needed and basic bugs that need to be ironed out before I would suggest this to someone coming from Windows. Hell, the text cursor in the menu still doesn’t blink and we are already at version 1.6. What is up with that?

XFCE – Version 4.10 finally caught up with Windows 7 (aero-snap) and Thunar 1.6 finally had them outshine Windows Explorer with tab support. However, as a full-time XFCE user, I will say that this environment is not 100% better than Windows 7. It is about even on features and stability. So telling friends to try out XFCE isn’t going to convince them to change.

LXDE – Same boat as XFCE but fits a slightly different niche. I wouldn’t expect using this to convince anyone to use Linux over Windows.

Pantheon – The most interesting of the new DEs that I have been following since inception. It is still too new though and only recently went to Beta status, so again I wouldn’t suggest this to people who want a stable DE.

Looking at all of these DEs, it is clear that Linux is at the forefront of innovation. The problem is that too many bugs are introduced that screw with the user experience. If you pick a stable DE, then you end up with a situation where Linux is simply on par with Windows. And finally, the last piece of the puzzle – OS X. This DE takes a middle ground between Windows and Linux – it is slightly more innovative than Windows, much more polished than Linux, and crazy stable like Windows. While it does require a paradigm shift to switch to OS X, users are content knowing that things will work as they expect. This is what Linux DEs will need to convince users to switch away from Windows.


While innovation is great, it is clear that stability is more important. What innovation does get introduced, though, must be predictable. Both Windows and OS X only introduce new features that have been thoroughly tested by focus groups and tweaked to seamlessly integrate with the current DE. Innovation in Linux is like the wild wild west. New stuff is being implemented all the time, but you have no idea where things are headed. In fact, it is often the case that Windows and OS X simply take the most popular new features in Linux, test them into the ground with focus groups, and then implement them in a super stable way months/years down the road. They are basically Debian but much faster. To give an example of what I mean, let’s walk through the DEs again.

GNOME 2.x – Very predictable. Nice! But it is no longer supported which leads to my third issue discussed later.

GNOME 3.x – Very unpredictable. Boo! In fact, this is the least predictable of all DEs I have ever encountered in my life. With each release, I have no idea what will change and what muscle memory I will need to retrain to use this DE.

KDE 3.x – Same comment as above.

KDE 4.x – Very predictable. Nice! But there is no tutorial to teach you what everything in the DE does. Seriously, I’ve looked everywhere for one and the information out there is simply too sparse and contains too few examples for anyone to learn KDE without trying everything out themselves.

Unity – Sort of predictable, but Canonical has shown that they aren’t afraid of making fairly big changes to the GUI (moving the min/max/close buttons, anyone?). Since this DE is still fairly new, I would be afraid to suggest it to anyone for fear that other big changes are in store.

Cinnamon, XFCE, LXDE, Pantheon – All very predictable. Nice! In fact, the reason why they are predictable is because it is easy to be predictable when a DE focuses on simplicity and getting a few things right at a time.

So we have a few candidates here which can be both innovative and predictable. However, they all fail the stability test above which is fairly important. They are also fairly young in most cases and that leads to my final problem.


Let’s first get this out of the way: Microsoft is clearly not the paragon of support. Neither are the Linux communities, with the differing personalities leading the various software projects. I don’t know about Apple so I won’t comment on them. What I mean by support is expected longevity and ease of upgrade.

Windows generally doesn’t upgrade all that often, and when it does, they are major upgrades that people expect to be painful (e.g., anything dealing with reinstallation). What they do well is that their OSes don’t have many major releases. Instead, you just continually get updates for as long as Windows is supported, which is a VERY long time. Hell, my older brother still uses Windows XP. Combined with how software updates don’t just stop on Windows, you can technically have the most recent version of all your software at all times on supported versions of Windows.

Apple, from what I understand, is an easy to upgrade system. In fact, I believe upgrading major versions is less painful than on Windows 7 and their support in helping you migrate is top notch. They are (fairly close to?) the ideal half-rolling release model.

Then there is Linux. You have a choice of normal release models (Ubuntu, Debian stable, Fedora, etc.) similar to Windows, half-rolling release models (Debian Testing, Chakra Linux, OpenSUSE Tumbleweed, etc.) similar to OS X, and rolling release models (Arch, Gentoo, etc.). For each of these major models, I will break down the issues users have with Linux:

Normal Release Model – Updates are either too frequent (Ubuntu, Fedora) or too slow (Debian stable). Neither of these would be an issue except for the fact that 1) software updates often slow down to a crawl or stop (Ubuntu PPAs alleviate this issue, but you are at the mercy of the maintainer) and 2) support even for LTS versions has never matched the longevity of Windows support. As someone who has used an LTS to end-of-life and simply never reinstalled (I get no updates on my netbook anymore), I will say that it sucks because all the repos die. Unlike Windows, it is a pain in the ass to install “new” software (really, old versions of software that are supported on your OS but must be manually compiled because the repos are gone) due to hunting down dependencies and compiling manually. Even if Win 95 isn’t supported anymore, it is still easy to install software so long as you get the installer.

Half-Rolling Release Model – This has the fewest problems for Linux. In fact, I would consider this to be a very good tradeoff if it weren’t for one issue – software updates tend to stop coming along with the core updates. So while Windows and OS X are getting the latest versions of software, Linux ends up lagging behind. Chakra Linux is the only candidate that seems to do this right, BUT they are not a stable distro. Their main issue is that they still haven’t released a GUI updater that is able to handle major upgrades without user intervention. Everything is still command line and involves editing files (they derive from Arch, so they have the same pain points of manual file editing when something in the core changes). LMDE, meanwhile, has a great updater but is based off of Debian Testing, which is embarrassingly slow at getting software updates.

Rolling Release Model – Nothing like Windows or OS X, and it would blow them out of the water IF it were stable. Always having the latest and greatest is an issue because you don’t know how well a given piece of software is tested. As an Arch user, I’ll give a very recent example I ran into and still cannot solve. Ibus was upgraded in the Arch repos to an unstable version because GNOME 3.x did something to make the previous version of Ibus incompatible with it. Of course, the unstable version is… well, UNSTABLE. It literally does not work in all but one use case (Japanese Anthy) and even then, that is the only part of the software that works. Every other feature (configuration, appearance, etc.) is broken. Solution? None. You have to find a way to roll back to a stable version while resolving the dependency rollbacks yourself. All of this is manual and all of this is painful. The last point to make is that rolling release distros tend not to have any sort of easy to use graphical updater due to the nature of the model.
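For the curious, a rollback on Arch looks something like this. This is only a sketch, not a recipe: the ibus version string shown is hypothetical (whatever happens to be sitting in your package cache), and the dependency untangling in the last step is the part that actually hurts.

```shell
# Reinstall the previous ibus package from the local package cache.
# The exact filename/version here is made up for illustration; check
# what you actually have in /var/cache/pacman/pkg/ first.
sudo pacman -U /var/cache/pacman/pkg/ibus-1.4.1-2-x86_64.pkg.tar.xz

# Then pin it so the next `pacman -Syu` doesn't immediately re-upgrade it,
# by adding this line to the [options] section of /etc/pacman.conf:
#   IgnorePkg = ibus

# Finally, any package that was rebuilt against the newer ibus has to be
# rolled back by hand the same way, one `pacman -U` at a time.
```

None of this is discoverable from a GUI, which is exactly the point: it’s manual, error-prone, and leaves you maintaining your own frozen island of packages.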

You can see that Linux is tantalizingly close to having a great release model, but none of them are perfect. They all fall short in major ways that make both Windows and Apple stand out as competitors. But this isn’t the worst part about Linux support. What really stings is when the support for your DE dies before your distro does. Specifically:

GNOME – Support for 2.x ended and while stable, it isn’t bug free like Windows Explorer or OS X. This includes software like Nautilus. While Debian Wheezy will have GNOME 3.x, Squeeze still has 2.x and there likely won’t be any way to upgrade it without upgrading the entire OS. MATE is an attempt at keeping this alive, but that isn’t exactly bug free either. You can make endless comparisons about how Windows is worse than even GNOME 2.x, but all users of XP, 7, and eventually 8, will know that bug fixes won’t suddenly end before the OS itself is unsupported. This tying together of distro and DE support is something GNOME does not have, and because of that, they should take end user usage into consideration instead of selfishly doing what’s best for the GNOME project. Hence why I still believe GNOME 3.x should be an alpha/beta instead of an actual released DE.

KDE – This is where the 3.x to 4.x move bothers me. Full rewrites are never bug free and often much buggier for a few years after release. The fact that KDE pulled the same thing as GNOME and released a fairly unstable 4.x line without maintaining the 3.x line simultaneously is something that really would have screwed end users. Again, they have no distro to tie support to (NetRunner and Kubuntu are close as they are sponsored by Blue Systems but there is no official KDE distro) so they really should consider end users over their own project’s interests. Trinity did pick up the 3.x line to continue support, but just like MATE, it isn’t exactly a good alternative except in the short term. Users will not want to deal with the hassle of converting to Trinity or MATE until the new line (4.x and 3.x respectively) becomes stable a few years down the road.

But there is hope. Outside of the big 2 (GNOME and KDE), there are other distros maintaining their own DE which means updates forever on their LTS versions. Unity, Cinnamon, and Pantheon all fall under this and for any user using those three, being able to see the DE slowly improve without having to upgrade the base OS is great. However, the keyword here is improve. As mentioned earlier, none of these DEs are stable enough to be suggested as alternatives to Windows or OS X.

Then there are the simple, stable DEs – XFCE, LXDE. Both of these are pretty distro independent and their updates are fully dependent on which release model you follow. But as mentioned earlier, they aren’t good candidates because they don’t offer a clear improvement over Windows or OS X.

So there you have it: why Windows and OS X beat out Linux even after decades of innovation. I still have hope that the community will eventually get it right (heck, even I’ve been trying to find the right combination of software to make an unbreakable XFCE Archlinux desktop), but who knows when that will happen. For now, I’ll continue following these distros in hopes that one of them will make that next big step into being THE perfect Linux distro. One with all of the right pros and none of the crushing cons.

  • Linux Mint / Linux Mint Debian Edition w/ Cinnamon DE
  • Ubuntu w/ Unity DE
  • Chakra Linux w/ KDE
  • Elementary OS w/ Pantheon DE

I have been following all of those since they were created, and the future looks bright. Let’s just hope it isn’t far away.

