
Get ready to play the blame game! Management vs IT

So a recent article on Ars discusses the increased demand for IT workers. For those of us in the industry, it comes as no surprise that demand for internal staff would eventually bounce back up, BUT there is a downside to the increased demand of which we are all well aware. Once companies start hiring locally, the blame for any screw-up will be pinned on the newly hired team of insourced IT. Any grand successes will be credited to the CEO/CTO/Management Team for their “brilliance” in choosing to move away from outsourcing.

That leaves us IT workers (sys admins, developers, designers, etc.) in a precarious position. Our next job may leave a black mark on our resume, bonuses are not guaranteed, and raises will be minimal. Is it any wonder, then, that hiring trends (in terms of what employees require) have shifted to:

  • Full-time work rather than contract work.
  • Fewer interviews rather than putting up with being grilled (Hey, company X! You aren’t the prom queen!).
  • Higher salaries. We aren’t idiots. We know demand is up because of the increased volume of hiring spam we get from recruiters (mine personally has more than tripled in the past year).

You see, IT workers are not idiots and, like most people, are not looking to be burned more than once, especially in this crap economy. A comfortable job, good pay, and long-term safety are priorities for those who choose to work in corporate IT. If we wanted exciting highs, awesome new tech, and a chance to become rich, we sure as hell would not be looking to work for established businesses and networking with recruiters. Instead, we’d take our talents to startups and network with VCs.

For the businesses that finally understand that majority outsourcing DOES NOT WORK (some outsourcing is ok, but if you hit majority outsourcing for IT, you are fucked), you had better realize that the fallout from betraying former insourced employees has created an enormous expectation gap. It is up to the CEOs, CTOs, Management Team, and HR to find a way to appease those that have been once (or for the unlucky, twice, thrice, etc.) burned. You are also negotiating with people who do critical thinking for their jobs. We know, nay expect, that you will put the blame on hiring insourced IT for any management fuck-ups that lead to a worse fiscal year.

In fact, I’d be willing to bet that if a company’s revenue doesn’t increase after two or three years of “investing” in insourced IT, management will draw the incorrect conclusion that insourced IT == outsourced IT. Guess what? Anyone who thinks that is stupid. If your fiscal reports just flatline, that means insourced IT stopped your falling bottom line. We bucked the trend and are in fact contributing greatly to the business. Any expectations for insourced IT to be saviors are unrealistic. Only in startups, innovative companies, and firms that invest heavily in research can IT actually make that type of impact. In most other places, we have no power to decide the direction of a company, the products, or in some cases, the technology we get to use! Doing better requires better decision making up top, and for that, who should be blamed? Hmm…


The Fallacy of the Milgram Experiment

Disclaimer: I’m writing this with only the knowledge I obtained from the Wikipedia article, so perhaps I am drawing an incorrect conclusion from a lack of evidence.

I read through the experiment after someone mentioned it as proof that people follow the orders of an authority figure even if they think it will cause harm or lead to them killing someone. I’m calling complete bullshit on that right now. For some reason, the scientists administering the tests seem to have completely ignored the intelligence level of the subjects in the experiment. Specifically, the ability of a person to draw conclusions about safety based on prior experience. You can see a hint of it in the quote under the Ethics section from those that went through with the experiment:

My only hope is that members of my board act equally according to their conscience…

This is then confirmed by the account of one of the early withdrawers:

In the journal Jewish Currents, Joseph Dimow, a participant in the 1961 experiment at Yale University, wrote about his early withdrawal as a “teacher,” suspicious “that the whole experiment was designed to see if ordinary Americans would obey immoral orders, as many Germans had done during the Nazi period.”

Both of these accounts suggest that humans participating in a scientific experiment would make some of the following assumptions:

  • The scientist is a benevolent dictator. What I mean by this is that the scientific community has a standard of ethics for human experiments and should not conduct an experiment that causes irreparable harm to the subjects. Logically, subjects would assume that even if the experiment does something harmful, like administering shocks, the damage is within reason and acceptable. Why? Because that is what we expect from our authority figures. This is even borne out by the variation that changes the governing body to a less prestigious one:
Experiment 10 took place in a modest office in Bridgeport, Connecticut, purporting to be the commercial entity “Research Associates of Bridgeport” without apparent connection to Yale University, to eliminate the university’s prestige as a possible factor influencing the participants’ behavior. In those conditions, obedience dropped to 47.5 percent, though the difference was not statistically significant.
  • Fellow participants have agreed to take part in the experiment. This is important, as the “teacher” automatically assumes that their counterpart subject, the “learner”, is willfully participating in the experiment. If not, the “learner” would have just left after being told of his role in the experiment. Thus, despite the “heart condition” that is mentioned and all the cries of pain, the “teacher” has no reason to believe that the “learner” considers the experiment unsafe for his health.
  • The voltages are safe to administer. As seen by this example:

Milgram himself provides some anecdotal evidence to support this position. In his book, he quotes an exchange between a subject (Mr. Rensaleer) and the experimenter. The subject had just stopped at 255 V, and the experimenter tried to prod him on by saying: “There is no permanent tissue damage.” Mr. Rensaleer answers:

“Yes, but I know what shocks do to you. I’m an electrical engineer, and I have had shocks … and you get real shook up by them — especially if you know the next one is coming. I’m sorry.”

In a 2006 experiment using a computer simulation in place of the learner receiving electrical shocks, the participants administering the shocks were aware that the learner was unreal, but still showed the same results.

  • Any participant that could call bullshit on the scientist did not continue with the experiment. Once they reached a point where they knew actual harm was being done (even if just mental harm), they quit. Luckily for the scientists, they didn’t have a subject pool made up entirely of educated engineers. Otherwise, I guarantee that the results would be the complete opposite of what they got. Furthermore, the follow-up experiment using a learner the “teachers” knew to be simulated is just complete bullshit. It doesn’t prove anything, as the “teachers” in this case know that they aren’t doing any harm at all. So why the fuck would they stop?

Do I need to continue? The common theme here is that humans will make certain assumptions about safety based on what they’ve learned growing up. Take a pool of applicants that grew up in a safe, law-abiding neighborhood and you will see a higher rate of obedience, because they will think, “Hey, if everyone participating in this experiment agrees to it and the scientists are (or in their mind, should be) good people, then what harm is there in delivering the shocks?” Take a group of well-educated engineers who know what happens mentally and physically when you administer shocks, and obedience will drop. They will call bullshit on the scientists’ safety claims much earlier. Take a group of delinquents from a poor area and you will end up with wildly varying results. They will either continue the experiment because they enjoy causing people harm, or they will stop earlier because they don’t trust the authority figures.

The common theme is that people aren’t stupid. That is the primary uncontrollable variable in human psychological experiments. It seems as though the scientists performing the Milgram Experiment were so eager to reach a compliance conclusion that they ignored the intelligence of the participants. So please, people, don’t use this experiment as “proof” that humans will blindly follow authority even if it causes harm to others. That just isn’t true.
