Tuesday, June 30, 2009

Feeling My Way Through A Manager's Role

I recently was put in charge of a handful of temporary help. They are working on some tasks that don't fall under the heading of "skilled labor", and chances are we'll never see them again after their temporary hiring period passes. They know this. We know this.

But this was the first time I was put in charge of a group.

I've never done this before. I've wondered what it would be like to manage a business or be a supervisor over other people; it's kind of like the "not my parent" syndrome, where we promise we'll be better parents than our own parents because of course we know everything our parents did wrong.

The problem is that we don't focus on what they did right. Granted, in some cases you may not have had good management over you before. But this becomes a problem when you're suddenly thrown into that position with little to no experience, so I am going through a trial by fire.

One problem is that these aren't full-time workers. They're temporary. They have no investment in the organization, so if they decide to screw around, the worst that can be done is they're told not to come back...and because they're young temporary help, this really isn't a huge blow to them in the long run.

Here are some observations and tenets I try to work by:
  1. Never dress down/criticize employees in front of other employees.
  2. I try to make clear notes and instructions of what is expected of the people being managed.
  3. I don't micromanage them. They have a set of goals and a deadline by which to achieve them. If they give reason for me to watch over their shoulders I will. Otherwise I keep tabs on their progress through spot-checks.
  4. I believe in not asking them to do what I wouldn't do. But at the same time...related to the item above...I'm not watching over their shoulders and double checking every line item. Otherwise I might as well do it myself; I'm repeating effort and it's a waste of time if I do that.
  5. Scheduling and organizing is hard if you're not at the top of the chain. I have sent several queries to other departments whose progress and goals our schedules depend on. I get very little response to those questions and continue to get last-minute surprises from people.
  6. My Asperger self likes to schedule and project and derive useful statistics and be able to keep track of our group's progress. Other people, I think, don't plan at all. Things I thought would have been roughly mapped out with other groups yield blank stares when I ask. This means that there is a good chance that at least part of our goals won't be realized in large part because other groups are throwing up roadblocks.
  7. I assume people can do the job until they prove otherwise, rather than the other way around. I also want to give them feedback along the way.
  8. I always try to tell them if they're doing a good job with a particular task without going overboard. Compliments are meaningless if doled out like candy, but morale suffers if the workers don't know what they're doing right or wrong.
  9. I think employees want to do well with their jobs. I try to keep that in mind while working with them.
  10. I try to facilitate their work. That means providing the tools they need to get the job done and being available for questions and help. If I'm not going to be around I make sure they know who to contact and how to reach them.
  11. I do have them keeping lists and charts of their progress. This doesn't mean that they will always stick with it, but this should reduce the number of items overlooked while trying to get tasks completed.
I really don't know what else I can do. I am trying to take over some of the work that most people find tedious, and I've spent a lot of time getting things organized and covering the bases on what they'll need to do to meet the goals before the end of their work period. I'm hoping they can appreciate the clarity I try to bring to their goals; I think clear-cut goals give them milestones and a sense of accomplishment. At least I hope that's what they do.

At some point I'm going to try finding a way to get feedback from them on how they think I'm doing as a manager, but I don't know how to do that. If anyone has any ideas, I'm open to hearing them...

Sunday, June 28, 2009

Private Data From the Government: Ironic Edition

People rarely stop to think about their data. I always found it amusing...working in a repair shop, I more than once ran across data that you'd never guess the average Joe who walked in would be downloading. I've had calls while working at an Internet Service Provider asking how to get an accidental wallpaper image removed before the caller's wife came home (you shouldn't have clicked "set as wallpaper" while staring at those, sir...). And yet these people don't give a second thought to the data they're exposing when they send their computers off for repair to some stranger.

Turns out that in most bureaucracies the problem remains, usually due to oversights, overwork, and understaffing...but in the end, the problem remains and there are "incidents".

The sad part is that even for people like me who do take time to think about these issues, there is plenty of data on me held by the government and various businesses (you know those cards you use for discounts at stores? They don't do that out of kindness, boys and girls...your shopping habits are worth big money), and I have no control over what is done with those records.

I just wanted to remind everyone of this after I ran across a story about American government information being found on a hard disk purchased in Ghana (that's a country, in case you're an American, where ignorance is a freedom we tend to like taking full advantage of...HA!). For $40. The data included, among other things, information on government contracts. Contracts awarded because of the vendor's ability to keep data secure.

The vendor is Northrop Grumman. You may have heard of them. They do a lot with our defense department...one of the big names in the world of toys that make things go boom.

They blame it on a vendor they hired to securely dispose of their hardware. The third party vendor doesn't know how it happened.
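Secure disposal isn't rocket science, either. Here's a minimal sketch of the kind of overwrite a disposal vendor should be doing, demonstrated on a scratch file standing in for a real drive (the /tmp path is made up for illustration; on actual hardware you'd point shred at the raw device, like /dev/sdX):

```shell
# Create a stand-in "drive" full of pretend sensitive data.
dd if=/dev/urandom of=/tmp/old_drive.img bs=1M count=1 2>/dev/null

# One random overwrite pass, then a final pass of zeros (-z).
shred -n 1 -z /tmp/old_drive.img

# Count the non-zero bytes left behind; zero means the wipe took.
tr -d '\0' < /tmp/old_drive.img | wc -c   # prints 0
```

A single pass is plenty to keep a $40 eBay buyer out of your contract data; the hard part, as this story shows, is actually doing it before the hardware walks out the door.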

Whoopsie.

Tuesday, June 23, 2009

Computer Geeks and Their System Care

Ever hear the quip that a plumber's sink is always clogged and a mechanic's car never runs?

Well, most of the geeks I've worked with were in an environment where we spent most of our time repairing everyone else's computers while our own systems...well, they worked, but they were cobbled together in a style that showcased a "let's get it working" methodology.

Cases would have missing panels, video cards weren't screwed in, cables were loosely attached...the lab was a mishmash of parts and wires running all over the place.

I was reminiscing over these memories because my computer...a rather stocky system with hardware RAID mirroring and some neat additional features, all assembled (rather nicely, by the way) by the folks at Puget Systems about three years ago for a decent price at the time...suddenly started making a rumbling noise.

I recognized the sound as an errant fan. Peering through the clear plastic window on the side of the case (yes, it has internal lighting; I had them do it when I ordered, because I'm no case modder myself...my own systems and those of my friends were usually Frankensteinian concoctions), I counted no fewer than four fans plus one hidden in the front of the case, and that didn't count the mini cooling fans in a 5 1/4-inch slot on the front of the case as well. One of the fans was buzzing.

Bearing going bad? Something loose? I couldn't tell offhand. A quick smack on the case would make the buzz disappear for a few minutes but then it would return. I opened the side panel and carefully prodded the fans. I found that if I gently lifted...applying slight pressure, that is...the giant fan covering the graphics card on my system the sound seemed to stop. Let go of the fan and the card drooped slightly and the sound started up again.

Great.

Several years ago my wife bought me a neat Transformers watch that came in a collectible tin in the shape of the red Autobot symbol. That tin happened to be the right height to prop up the video card...so now there's an Autobot symbol facing through the plastic window on the side of my computer, applying enough pressure that the buzzing noise stopped.

Part of me questions the wisdom of putting something metal in there to do that. I figure I'll need to order a new video card at some point to swap it out. I take comfort in a few things:
  1. Computers are sensitive to shorts. If you short something out it'll generally die right away. Trust me. I slipped a screwdriver many many many years ago while a computer was running and the end hit the legs on the memory chips on the motherboard; Windows NT (yes, it was a 3.x version of Windows NT) blue screened on me. Whoops. Rebooted just fine though. Lesson learned. Sort of (obviously).
  2. Computers are EXTREMELY sensitive to goofing with the video card. If that card slipped a little out of the slot or overheats...generally any little problem with the video...the system will crash HARD. The tin slipped into place and the system kept right on chugging along.
  3. Most shorts or issues will have the machine shut down rather than blow up in a massive fireball. So far...nothing that I know of has happened.

This doesn't mean that it's wise to put that into the case. I figure it's a temporary measure until I get around to ordering a new video card. In the meantime it gave me a few moments of reflection on memories of the past. And, of course, a reminder to make backups of my data.

Monday, June 22, 2009

Top Posting in Email

One of my biggest pet peeves is top posting in email.

People do it all the time and it drives me bonkers.

First, what is top posting? It's pretty self-explanatory. You write an email to someone. When they reply, they hit "reply" and just start typing, leaving everything you said quoted below their added content. They post new content at the top of the email, leaving everything else intact below. Their mail program usually adds more helpful crap in too, like who quoted it, when, who else it was to...nice cruft. Really business-like people add four or five lines of helpful signature at the end declaring that the content is meant for particular users and if you got it in error you should delete blah blah blah...official sounding nonsense that never ever in my searching has ever been tested in court, so it's largely a waste of space.

People who top post generally fall into one of two categories: those who say "I didn't know better, sorry..." and those who come up with every excuse in the book for why they do it.

What it really boils down to is they do it because they're lazy.

See, for older people especially, email is instant messaging. Before the Internet became a household phenomenon where even Grandma was pecking out how-do-you-do's to people with Outlook Express, the idea of mail meant writing something that took a week to get to the other person. If you were lucky you'd have a reply in a week and a half from the recipient. Because of this cost in time and effort, people crafted messages to one another. You took time to put some thought into what you were saying.

Email made it really easy to just splat whatever brain fart comes to mind.

I recently had to email out a message to a number of people to coordinate a series of items. The message consisted of several paragraphs with various aspects of the projects spelled out. The first person who replied had one line at the top of my paragraphs: "We can do that next month."

Huh? It took me five minutes to puzzle out what he was talking about.

So how should you email people in reply?

I advise you to use this thing called common sense. You read the message and quote information back. You address each item you wish to reply to by replying directly under the previous content that is relevant to your statement. That way the email reads either as a standalone note...in which case you delete the previous content and quote nothing...or as a conversation.

I used to say the reason I prefer this is because I'm literate; creating emails like this makes the message clearly understood. The recipient knows what you are specifically referring to and doesn't need to think about what you could mean...it clears up ambiguity.

This also trims messages down to take up less space.

It keeps me from having to bounce from the top of a message down and back up again whenever I need to figure out what you're talking about, especially if you combine several topics into one top post.

In short, it makes your messages easier to understand and forces the recipient to concentrate on the message, not translating what you're trying to say.

People who send these to me, when I politely ask them not to, sometimes get quite upset...apparently they interpret it as my implying they don't know how to write. Here are their typical reasons...

It's how the mail comes up when I hit reply.
(Yeah? So? This goes back to crafting a message.)

I include previous emails like that so I don't need to dig for it in my sent items.
(Why do you keep sent items then? Do you work in the Department of Repetition and Redundancy?)

What's the big deal?
(The big deal is that if you're like me and have to deal with a crapload of messages a day, having to spend the extra cycles figuring out what you're talking about gives me a HEADACHE. Not to mention that it makes you far less effective a communicator.)

I understand the message just fine.
(That's nice. Too bad I'm the one that has to understand what you're talking about. If I understand German just fine, is it okay if I email you in German?)

It doesn't hurt anyone doing it that way.
(Except when you accidentally send that bit of sensitive information out to everyone else because you were too lazy to trim it from the quoted portion...whoopsie!)

I've come up with two ways to deal with it. The first is the delete button. I do this in help lists quite a bit. I was volunteering my time to help people using Ubuntu when possible, and in return I had help when I needed it from them. But more and more it seemed people were top-posting in reply. So I started deleting them. I don't need to waste my time on something that is frustrating me.

Sometimes I get messages that are largely irrelevant to me in private emails, so I skim, and if I think it's nothing I need, I hit delete. If someone needs something from me they'll contact me, and I'll ask them for the specific information I need out of their top-posted message.

The second way I deal with it is to print the message out and then sit down to read the relevant information. Environmentally conscious top-posters hate it when I do that; if the message was important enough to interrupt me and force me to waste seconds of my life figuring out what you had to say, then it's worth the life of a tree for me to figure out your message without scrolling up and down and up and down. Besides, it had to be important for you to send the entire thing, right?

For the most part my irritation at top-posting has gone down considerably since I started doing this and stopped helping people on mailing lists. Once the constant barrage of these types of messages died down, I found that having a couple of messages top-posted once in a while didn't bother me as much, probably because private mails usually don't have as much material that needs to be followed as a thread of conversation (technical lists have back-and-forths that can span tens of emails).

There's nothing wrong with firing off a one-liner email. It's when you reply quoting my message and I either have no idea what exactly you're referring to because it's ambiguous (which comment did you mean? What are you talking about?) or you quoted my message which had absolutely nothing to do with what you're talking about. Why?? I guess it's part of my Aspergers...but I can't help but think that this is a side effect of the instant messaging and instant gratification culture technology is cultivating. The more I see these types of emails the more I wonder if we really aren't losing something in not having to write things out as actual snail mail messages anymore.

Just...read your message and think about the recipient. Will I understand your message with minimal effort? Do you need the quoted material in there? If not, highlight it and delete it...please!

Sunday, June 21, 2009

Microsoft's PC Ads

I know this has been known for a while in the blogosphere (or by people with common sense), but I was thinking about it because I saw the ad campaign on TV again.

You may have seen it...some doofus off the street, a real Jane Average Consumer only this time she's named Lauren, is given a little over a thousand bucks and told that if she can find a system with a particular set of specs for that amount or less she can keep the computer. It's like winning the lotto!

Lauren goes to the Apple store and immediately heads back out to report to the camera that there's nothing within that price range that is near those specifications. "I guess I'm not cool enough to be a Mac person." Ooh, SLAM!

She ends up buying a wonderful WINDOWS computer from someplace...Best Buy, maybe? Woo hoo! Jackpot!

One problem. She never went into the Apple store. The proof is in the still shots captured from the advertisement.

C'mon. How can you screw up staging something so simple? Really. How dim do you have to be to screw up something like that?

Quick business tip. When you're dealing with the subject matter of technology, even if your target audience is meant to be the average computer-ignorant consumer, don't skimp on the details, because geeks will call you out on it. Then they'll advertise it in the same Webbertubes where your average computer-ignorant consumer in the market for a computer may run across the information.

Know your audience as well as your potential audience.

And don't mess with geeks.

Wednesday, June 17, 2009

Enjoying the Experience, Part Two

Okay, I promised it yesterday, so here it is: the followup.

I posted that information hoping to give some background on why I feel justified in mentioning this particular pet peeve. Hey, I did say that it would loosely tie to that experience issue...and it does. Every time I run into this issue it kind of leaves me peeved.

I was recently involved in troubleshooting an issue with some software in which part of the problem was tied to printing. The tech person I talked to also happened to be one of the developers of the product (a good hint that this is a smaller software vendor, which is a mixed blessing...yes, you get good support from someone who knows the intimate details of how the software goes about a task, but they also have to juggle fixing bugs with your incessant neediness to have a product that works and your subsequent whining when it doesn't). I'll call the developer Carl.

We had a printer, shared from a Windows print server, set up on the computer. Sending a print job made the program act like it was getting "stuck," and the program would hang. Killing the application would corrupt some data files. A real pain in the rear.

I looked at the logs on the printer server system and it was seeing the print job; it listed the username under which I was logged in and a document called "no name" followed by a job size and then it said it printed zero pages.

Huh?

"The server sees the print job and it has a size with it, but said the job was zero pages long. Could there be an embedded code in the print job that's keeping the printer from rendering it?" I'd run into issues like that before with bad PDF files and certain print jobs on particularly buggy drivers.

"No, that's normal. We don't use the Windows GDI or anything like that to render the print job. We send it straight to the printer in PCL."

Uh oh.

Carl is basically telling me that their application's print system is bypassing Windows for certain things...I have to have the printer set up in Windows, then open their app's settings and select the printer, at which point their software renders the print job itself and sends it to the printer, rather than a "good" application handing the print job to Windows and saying, "Here, print this," and Windows running off giggling to do just that. A second system didn't have the printers installed that the application said it was pointing to in order to print...and the application worked on that machine.

Let me reiterate. Windows did not have the printer installed. But the application had it in the printing settings. And the application worked. Carl speculated that at one time the printers were set up in Windows and that's how the application got the initial settings and so deleting the printer from Windows didn't affect the application...

Adding to the confusion was the fact that despite Carl telling me their product doesn't interact with the Windows printing system, it was obviously doing something with it or else I wouldn't see it showing up in the Windows logs on the server. Going directly to the printer would bypass this step. What was showing up was goofy (it printed fifty pages, but the logs said it was a 0-page job. Figure that one out.)
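For the curious, "sending it straight to the printer in PCL" isn't magic. Most network laser printers will happily interpret raw PCL thrown at TCP port 9100 (HP's JetDirect port), no OS print system involved. Here's a tiny sketch of building such a job by hand...the printer address is hypothetical, and I'm not claiming this is what Carl's code does, just what bypassing the OS looks like in general:

```shell
# Build a minimal raw PCL job by hand. ESC E is the PCL printer
# reset command that normally brackets a job.
printf '\033E%s\033E' 'Hello, raw PCL' > /tmp/job.pcl

# On a real network you would fire it at the printer directly,
# no driver, no spooler (192.168.1.50 is a made-up address):
#   nc 192.168.1.50 9100 < /tmp/job.pcl

# Sanity check: the job file exists and has our text in it.
wc -c < /tmp/job.pcl
```

The catch, as this story shows, is that once you go this route the application owns every rendering bug itself, and tools that watch the OS print queue only see confusing half-truths.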

Ugh.

This is where user experiences come into play. I get a bad taste in my mouth whenever a developer decides he or she knows better than the OS "way" of doing something and boldly leaps forward with their idea of how something should be done, everything else be damned.

One of the few things Windows did right was unifying the printing system for applications. In the DOS days every app that wanted to print had to include some set or subset of drivers and you configured each application to see your printer. If your word processor didn't support your particular printer or you couldn't find a driver for your word processor to see your printer you prayed that the word processor had a printer driver that was "close enough" to work. Then you repeated this process with your spreadsheet software. And any games that happened to print maps or scores. It was a nightmare at times.

Once Windows hit the personal computing scene every application could just include a library that allowed them to hand the job to Windows. Windows needed a driver to print, and from there it took over and did the grunt work. One driver! Not fifty! Yay!

But there were people who insisted this wouldn't do. I remember WordPerfect ported their word processor from DOS to Windows and it had its own printer spooler system. You printed, it handed it off to the spooler (their own stuff) which rendered the print job and then I think it handed it off to Windows to finally print. I hated having to troubleshoot wonky print jobs that would get stuck in their rendering engine or crashed because of some weird bug in their print system.

How does this tie to my previous post, aside from the fact that my experience with this WordPerfect architecture abomination left me with a bad taste in my mouth for other programs trying to go rebel and dazzle me with programmer cleverness?

Because I'm not entirely sure how the program at hand...the one I was just troubleshooting...was actually working. The application had to deal with software that was really meant to run on "old iron"...mainframe-type stuff. Basically this program was to take output from another application meant for human resources work and render, format, and print it to a printer system (confused yet?). To put it in simple terms, I'm working on software that was a port...an evolutionary step with roots in mainframe computers. Non-graphical, probably. The culture behind such designs is one of "I don't wanna change and you can't make me unless I absolutely must," so software with roots in IBM mainframe-type systems definitely shows the heritage.

So maybe there's a good reason they chose to do something weird with the printing.

Maybe their programmers were just more comfortable playing with the file contents directly. Maybe they had source code on hand and just wanted to port the software to a Windows graphical interface, and never quite left that mentality.

But please, for the love of whatever deity you subscribe to...please stop trying to fight what few usable features Windows has. Or any operating system on which you're developing. They are there to make life a little easier for you and the end user, and it seems that almost every time someone comes up with some clever enhancement to the way things "should" be done, something ends up breaking and your clever little whiz-bang improvement becomes a detriment to the end user.

My user experience has been horrible with such schemes. Of course Carl would defend the decisions he made in coding his contributions to the application...he's invested in the product. Others at the company he works for are equally invested. It seems that eventually developers get so close to their "baby" that any user having issues with things like this is regarded as a moron, or it's written off as the user's computer being misconfigured; their system is an improvement on the crappy Windows way of doing things, after all! It's obviously my fault.

Developers totally lose perspective of what it's like for their users. Whether Carl had good reason or not for how and why this was implemented...aside from the fact that I was working to troubleshoot the issue as a system administrator, the reasons that he and his team did this don't really matter to me. I'm the customer. I just wanted to use the product. He can justify the design by all their meetings or directives or "it works for me" or even the voices in the heads of the development team. I don't care. What I care about is that I need those print jobs working and they're not.

Which means my user experience is leaving me with a really bad taste in my mouth for your product as well as your company name.

I had another issue a few years back with a certain point-of-sale company while consulting for a cafeteria. The company that initially got the contract...not my choice...put in a product that was experiencing issues with unreliable transactions to the database. While troubleshooting I found that they were using Microsoft Access as the back end.

We were feeding data from about ten registers in different locations to this central point-of-sale system. And they were feeding it into MS Access. I'm not a database programmer or architect, but I've never heard of this being a good idea.

The vendor wanted us to keep throwing hardware at the problem. It was "obviously" because we were short on RAM, or our switches weren't configured to handle the traffic reliably; even power fluctuations were supposedly causing issues, so they had us put line filters and UPSes on various systems and equipment. Finally someone...like yours truly...wrote up a nice long report on why the bottleneck was the way they built their system and there was no fix short of replacing it. A replacement vendor used a database based on Advantage, and later MS SQL Server, and for the most part the issues disappeared.

That vendor has a permanent black mark for making what, in my experiences, was a very stupid design decision and then trying to shove the problem onto us. Bad customer experience.

So keep these things in mind if you're offering a product or service. It can take a very, very long time to make a loyal customer but only one bad experience to make an enemy who is very willing to share their opinions of your product with others looking to spend a lot of bucks in your industry...

Sunday, June 14, 2009

Ubuntu Sound Problem

While I have had a lot of luck and positive experiences with Linux as opposed to the most popular OS in the marketplace, there are issues that make Linux...less than perfect.

I have a recurring issue on my workstation where it just loses sound for no apparent reason. I don't know if it's tied to using Flash on websites, but that's usually how I notice it (for example, Youtube suddenly plays videos without any sound). This is an issue that I can't imagine your average home user having the patience to deal with.

Then again, I'm not a patient person.

The solution I managed to find online without having to restart the computer (although, I admit, I don't remember if just logging out and back in fixes it or not...but that's annoying too) involved the following steps:

sudo killall pulseaudio
sudo alsa force-reload

I also made sure that I exited from Firefox and killed any instances of Firefox that were still running in the background. For some reason there seemed to be a ghost of Firefox in the process list, and if something was still running attached to the sound control processes then it would not reset properly. The first time I tried this solution it didn't work; I found an invisible Firefox running, killed it, retried the commands, then launched Firefox, and the Youtube videos had sound restored.
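If you want to hunt those ghost processes down before running the reset, pgrep and pkill do the searching for you. A sketch (process names can differ by distro and Firefox version):

```shell
# List any Firefox processes still hanging around (no output means
# the coast is clear; pgrep returns nonzero when nothing matches).
pgrep -l firefox || true

# Politely ask them to exit, then force-kill any stragglers.
pkill firefox 2>/dev/null || true
sleep 2
pkill -9 firefox 2>/dev/null || true

# Confirm nothing survived before reloading the sound system.
pgrep firefox >/dev/null || echo "no firefox left running"
```

Only after that last line reports a clean process list is it worth running the pulseaudio/alsa commands above; otherwise the lingering process grabs the sound device right back.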

Weird. And very, very annoying.

Friday, June 12, 2009

DD and Netcat to Clone Hard Disks

Normally when I need to clone out a hard disk image to a set of systems, I have success using my RIP Linux CDs to boot up, mount a remote share, and save an image using Partimage, a utility that creates a compressed file with the contents of a hard disk partition saved to a remote system. I can then copy it down to the other systems, limited only by my network bandwidth.

But some systems get indigestion from this. They don't like RIP, or they don't like Partimage for reasons only the silicon gods fully comprehend.

We have a few systems like this that I have to work with. At times like this, I use two basic utilities together to make a literal block-for-block copy of the hard disk to the other disk over the network.

You need two machines identical in hardware.

The source disk must be the same size or smaller than the target disk on which you want to place the image.

Boot RIP (or any other Linux bootable liveCD, needed if RIP won't work on that system or can't see the network card or some other goofy anomaly). From a command prompt on the target system, run

nc -l -p 7000 | dd of=/dev/(drive)

OR

nc -l -p 7000 | pv | dd of=/dev/(drive)

On the source system which has the working installation, run

dd if=/dev/(drive) | nc (ip of target system) 7000 -q 10

Okay...some explanations.

(drive) means the hard drive. Probably /dev/hda or /dev/sda...use dmesg to find what drives are detected during bootup. And you want the drive, not the partition (such as /dev/hda1, which is the first partition on /dev/hda).

(ip of target system) is the IP address of the system on which you want an image put. You get that on your target machine after booting it up and getting networking working...ifconfig should tell you that information.

dd is a command that will read the raw disk device. Handy, but dangerous! Make sure you are careful...one slip of if (input file) vs. of (output file) and you could screw up your working system with an image of a blank drive...use with care!

nc is netcat. It's like a Swiss Army knife of network tools in that it allows you to do a variety of functions over the network. Here it's acting as a conduit for sending information from point A to point B. You're telling it to use port 7000; one machine is listening for a connection (-l), and the other waits 10 seconds after data stops coming in before quitting, to make sure data is flushed and it's not just a connection glitch (-q).

The pv command is optional. It lets you "view the pipeline" (you're piping data from one command to another) so you can see how much progress is being made. Ordinarily it looks like nothing is happening...the systems just sit there, and I have to watch whether the drive LEDs on the front of the case are blinking to tell a connection hiccup from ongoing activity. pv tells you in real time what's going on. I should also mention that while RIP includes pv, whether it's on your disc depends on which liveCD you choose to use.

There are variations on this setup...I'm only giving the simplest form. For example, you can stick gzip in the pipeline to compress data on the sender and decompress it at the receiver to, theoretically, speed things up a bit. Remember that dd reads every block on the drive whether there's data in it or not, so this can take hours. But it's going to read every single block, fragmented or not, no matter the file system or partitioning. It's reliable but takes a while.
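As a sketch of that gzip variation, here's the pipeline run against a scratch file standing in for the disk, so it's safe to try; on real hardware you'd replace disk.img with /dev/(drive) on each end and keep nc in the middle as in the commands above.

```shell
# Safe sketch of the gzip variation: a scratch file stands in for the
# raw disk. On real hardware, gzip sits after dd on the sender and
# gunzip sits between nc and dd on the receiver.
dd if=/dev/zero of=disk.img bs=1M count=4 2>/dev/null   # fake 4 MB "disk"

# sender: dd reads every block, gzip compresses the stream;
# receiver: gunzip decompresses, dd writes the blocks back out
dd if=disk.img 2>/dev/null | gzip -c | gunzip -c | dd of=copy.img 2>/dev/null

# Verify the clone is block-for-block identical.
cmp disk.img copy.img && echo "identical"
```

Note that compression only helps if the drive holds compressible (or mostly empty) data; a drive full of already-compressed files can actually transfer slower this way.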

I also give no guarantees. I'm telling you what works for me. If you screw up your system...how many posts have I put up about backups already?? Also, this is a totally free solution. There are commercial cloning programs out there that can run hundreds or thousands of dollars depending on how much you need to do. This one takes elbow grease because it doesn't automate much. For example, if the systems are on a domain network I have to remove the source machine from the domain first, clone it, boot Windows, change the system's name, use newsid to give it a new ID (although rejoining the domain should fix that anyway), then put them both back on the domain. Some commercial offerings, I believe, automate this.

The upshot is that once you understand the process it's as flexible as a circus acrobat. You can, for example, mount a remote share (or use netcat) and use dd to copy an image of the disk to another computer for storage. Use zip (or 7-Zip) to compress it down. It takes a few hours of downtime, but you'll have an image from which you can restore your computer later, and best of all it's a block-by-block copy. The significance: some filesystems allow little tricks to "hide" files (like, say, viruses) on your disk where you can't see them without special tools, and some software (like certain CAD packages) does funky stuff with the boot sector or odd areas of the drive to prevent you from pirating it. dd doesn't care. It copies the raw data, so you'd be able to restore those programs (except the malware...for the malware, you'd hopefully have an image from before the infection). The possibilities are far too many to enumerate here. I'll leave it to your sysadmin's imagination.
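The image-to-a-file idea can be sketched like this; everything here is file-based and made up, but a real run would read a raw device like /dev/sda and write the image to a mounted network share.

```shell
# File-based sketch of imaging a "disk" to a compressed file and
# restoring it later. "disk.img" stands in for a raw device such as
# /dev/sda; "backup.img.gz" would normally live on a mounted share.
dd if=/dev/zero of=disk.img bs=1M count=2 2>/dev/null
printf 'pretend filesystem data' | dd of=disk.img conv=notrunc 2>/dev/null

# Backup: read the whole "disk", compress, store as an image file.
dd if=disk.img 2>/dev/null | gzip -c > backup.img.gz

# Restore: decompress the image back onto a blank target.
gunzip -c backup.img.gz | dd of=restored.img 2>/dev/null

cmp disk.img restored.img && echo "restore OK"
```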

Thursday, June 11, 2009

Beware Internet Explorer 8

Any major update should be approached with trepidation...but according to this site, there are some issues that can render your system inoperable if you go through with the update to Internet Explorer 8 from Internet Explorer 7.

Have a backup handy first!

He has a link to a tool that his friend at the computer repair shop has used to revert back to IE7, but when I went to the site to grab a copy of the utility I got an error that the user had hit his download limit on that host...

The problem, as his friend described it: users upgrade to IE8 and on reboot have nothing but wallpaper showing. No icons, nothing usable, even in safe mode. He said he had to use a bootable utility to revert to a restore point, at which point the machine would boot but had USB problems and no ability to connect to the Internet.

The utility rolled the machines back to IE6. On two of the three machines, upgrading again to IE7 worked as expected, while the third still had USB problems, so he had to leave it at IE6 until another fix could be found.

Ouch.

Just a bit of warning, that's all. Not too surprising, since Internet Explorer has tentacles extending so far into the operating system that when it gets screwed up, your whole system can be screwed up (or made vulnerable to malicious attacks)...another reason to use Firefox. Firefox is not tightly integrated with your system, so if it gets screwed up then typically it's just your web browser, not your entire operating system, that gets hosed. For me it's nice that it's cross-platform, too: I use Windows, Linux, and OS X, and Firefox is available on each.

Wednesday, June 10, 2009

Antivirus Design and Usability

Developers of software tend to be surprisingly out of touch with users. Even technical ones like me get thrown for a loop sometimes.

Set aside the problems I have with antivirus in general...it eats up resources, gives users a false sense of security, and is a band-aid rather than a fix (although users feel otherwise)...and there's still one area where some AV software falls short but could improve: just being "user friendly".

I just had a user contact us about a message of a virus on her computer. I checked on her system and sure enough, the software we're using for virus protection had a window popped up saying she had "JS.shellcode.AD" infecting her system.

The name tells me it was most likely a drive-by attack: JavaScript downloaded to her cache. In other words, probably minor. Antivirus software likes to make the littlest things appear on par with nuclear disaster, probably because it helps justify the cost of keeping a subscription renewed. But again, that's part of another issue.

My problem was that this notification window told me, quote,
"Killing Method: Not Removed."

Huh? Is that like saying you killed a bug by letting it run away?

"File Access: denied."

Does that mean the antivirus was denied access, or the AV software is denying the user access to protect them?

"Proposed method: open file"

Are you suggesting it to me as an action to take? Or are you telling me what the user was trying to do? Because as a program continuously monitoring system activity, chances are pretty good you caught the problem because the system was trying to open a file that it didn't like. I assume when reading about an accident that a car struck another car because of something stemming from driving, not because the driver was flying around and landed on the other vehicle. So why are you reporting it to me unless this was actually trying to tell me something else?

The wording just plain sucks.

I then tried finding the file in question to see if I could delete it...after all, the AV is saying that it was "Not Removed". I couldn't find it.

I tried browsing from a remote computer into the root share. Windows nowadays likes to dynamically reformat the way it presents information in order to "protect" the user. While I understand the (simple) concept of a directory and file structure and have no problem navigating to folder X in Y inside Z to find file A, Windows will hide certain folders and combine them together in Explorer. For example, your Temporary Internet Files are actually a series of subfolders with names like EROF43D. When you view your temporary files in Explorer on the local machine, you see a huge list of cookies and cached files in one big list. If you pull them up from a remote computer, you can actually navigate into the individual cache directories with their goofy names to find what the machine is actually seeing (or if you boot with a Linux boot disk you can navigate the folders the way they really are).

I hate Windows hiding this crap.

OS X's Finder does something similar to make the disk more "user friendly".

Anyway, browsed to that location. The file wasn't there.

Huh?

Is it quarantined? Sometimes AV software will take a file it can't "fix" and put it into a "protected" folder, so you don't access it again but can, theoretically, restore it if it was a false positive. But the error the AV popped up with and the log in the program didn't say anything about moving or quarantining the file.

ARGH!

In the course of repairing a second "virus infected" system here, I copied some tools from the Sysinternals Suite (free! Wonderful tools for sysadmins!) from a network share to the local system to help with diagnosis. That same antivirus program deemed one of the applications a threat. And deleted it.

I tried copying it over remotely to that system. The AV deleted it again.

@#$#$!@# piece of @#!

This same system, on which the AV protected me so vigilantly, still has problems appearing...among them Virtumondo (remember the problem with them? Or at least the suspected problem? Yup, that same system...), confirmed with Spybot Search and Destroy. Undetected by the antivirus. Thank you so much! It had hit one small component while leaving other parts active in the registry! Yay!

What's my point? My point in this particular post, aside from the side trips into Rantville, is that I wouldn't have been quite so frustrated had the messages been clear and had the ability to work on the system not been thwarted by the interface. The antivirus started it...but Windows has some of this built in too, trying to be user-friendly while putting barriers between me and the problems I need to work on. I have to find ways to work "around" the friendliness just to get the job done!

I mean, c'mon...who thought it would be a good idea to report the "killing method" on an infected file as "not removed"? That's not a method. It could be an action taken, but then why can't I find the #$$% file afterwards? You obviously did something to the file in question! WHERE IS IT!?

What happens when you throw these types of obstacles in front of the users? You're being counterproductive. Like I said above, I spend time finding ways to work around these issues when in reality the developers should just fix the problems. Read this blog entry from a developer back in '05...users will find ways to make it usable even if it means simply not using your product (and in the process screwing themselves over or breaking your viciously stupid policies). It also fosters the attitude of resenting your company, your product, or your department, depending on whether you're a vendor or an IT department in charge of helping the user.

You can't make all users happy and I won't pretend you can. My problem is that I'm a technically inclined person, one of the people usually called on to help sort out the issues users have when they don't want to or can't figure out why their computer isn't working...and you're making it hard for me to work with your products. That crosses a usability line. Would you purchase another car from a company after finding that your mechanic can't work on it? Or that he can, but because of the way the company designed the engine it takes him an extra three or four hours (with an hourly fee to go with it) to do a job that on another brand would have taken one-third the time?

Usability testing...look into it.

Tuesday, June 9, 2009

Virus Hunt: Trojan.downloader-54811

Windows...bleh.

I had a call about a system where the user had a notebook at home; she said something popped up with the word "virus" on it, and whenever she opened a web browser it would just close back out.

A little vague, but that's par for the course.

I had it brought into the work area and booted the laptop with the latest version of RIP Linux (I wanted to use the network to get tools and repair information, but you should NEVER plug a system...especially a Windows system...into the network if it's suspected of being infected with something. NEVER. Booting RIP from a CD bypasses whatever is on the hard disk, mitigating the risk).

RIP has two antivirus tools available if you're connected to the network...which you'd need to be anyway to get the latest definitions...xfprot (a front end to F-Prot) and ClamAV. I ran both, and they each buzzed an alarm on only one file, hidden in c:\windows\system32 and called __c00BBCE1.dat, telling me it was infected with "trojan.downloader-54811."
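The ClamAV half of that scan looks roughly like this from the live CD; the mount point and partition name here are assumptions, but the flags are standard ClamAV.

```shell
# Hypothetical sketch of scanning a Windows disk from a Linux liveCD.
freshclam                        # pull the latest virus definitions (needs network)
mkdir -p /mnt/windows
mount /dev/sda1 /mnt/windows     # assumption: the Windows partition is sda1
clamscan -r --infected /mnt/windows   # -r recurses; --infected prints only hits
```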

Well, good that only one file was triggering an alarm. The fact that it was a downloader meant it probably came in through the web browser and was some kind of hidden component of malware meant to act as a "hook" to download more malicious crap in the background of the user's system. Marvelous.

Today's viruses are meant to take over your computer. Whenever an antivirus vendor finds a signature to combat a specific "virus", the malware author changes a few small details and re-releases the malware into the wild; the vendor eventually finds a sample, analyzes it, comes out with a new signature, and the cycle repeats. Thus, seeing another "trojan.downloader" is like seeing another piece of trash along the freeway. Not a big surprise.

A Google search turned up very little, probably because different vendors classify viruses under different names and because there are just so many of them, each ever-so-slightly changed, that it's ridiculous. Imagine taking a copy of Huckleberry Finn, changing three words in the fifteenth paragraph of chapter 4, and having a whole new book published because of it...that's the way it is with viruses.

Since I was scanning under Linux, I opened a terminal, navigated to the file, and ran the strings utility on it, which, appropriately enough, looks for strings of readable text in a file. One stuck out: a call to look up the DNS address of zappoworld.com. A Google search for that name yielded a hit on a blog detailing one guy's efforts to get rid of some malware apparently called "Virtumondo". He actually had two posts, one here and one here, chronicling the fun he was having.
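The strings step can be reconstructed with a fabricated stand-in file (the real file was the .dat in system32; the bytes below are just an illustration):

```shell
# Fabricated stand-in for the suspect file: binary junk with a readable
# hostname buried in it.
printf 'garbage\000\001\002zappoworld.com\003more garbage' > suspect.dat

# strings prints runs of printable characters (4 or more by default),
# which is how the embedded DNS name stood out.
strings suspect.dat | grep zappoworld
```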

While I can't verify that he and I were fighting the same fight the description was eerily similar in what little detail I bothered gathering to this point.

I agree with his assessment...he'd most likely have to reformat and reinstall to be sure the "infection" is completely gone. Once a system is compromised, you don't know what it could be hiding. Most users overlook this idea and figure it can't be that bad for them; they just want the machine back in a "usable" state and are happy with that. If they don't mind the idea that something is recording their emails as they type them...their passwords...etc., and periodically sending them over the Internet to organized crime scum in other countries, then I suppose that's their choice.

So if you found this post from a Google search of this particular trojan downloader's name, you have two choices. The first, the one I recommend, is to wipe your computer, reinstall from scratch, and restore your personal data from a backup. Hopefully a backup from before the infection (this is why I don't normally do system-level backups on my personal computer...I copy my *data*, my personal files and folders, and want a clean set of system files from a fresh install; if a system is infected with something wonky or gets corrupted, a system-level restore would just bring back the same problems I'm trying to solve!). The second is to start downloading the latest antivirus definitions, maybe a bootable disc or two with AV tools like RIP Linux, along with Spybot Search-and-Destroy, Ad-Aware from Lavasoft, etc., and prepare to spend a weekend and a half searching and scanning and rebooting and erasing and lather-rinse-repeat until your computer is supposedly "clean"...keeping in mind that true stealth malware will have hooks into your operating system that cloak its processes and may simply reinfect your system the moment you reboot.

I strongly recommend the first method. It's right up there with using a Mac or Linux instead.

Friday, June 5, 2009

Backup Plans

Many businesses don't give enough consideration to backup plans. This seems to be the case in most organically grown small businesses...doctor offices, small retailers and private businesses, or larger organizations that don't see IT as worth investing a lot of time and funding.

I was just dealing with an issue the other day where someone was fighting database corruption. The server itself should be getting backed up...the machine is remotely hosted, but we don't have paperwork or verification that it is, or how far back the backups can be restored, that I know of. But the part that got me thinking about this was that part of the process involved reading data from a USB drive inserted for the local application to access.

In passing I asked him if there was a backup of the data on it. He looked at me quizzically..."Don't you have one?"

"Um...no. Not to my knowledge." Which is bad. If my boss is hit by a truck there's not much chance of being able to recover from this kind of data loss as I don't know if there's information recorded anywhere about procedures or backups or master copies of this particular batch of information.

Because it's an end user dealing with the information, it just never occurred to him that USB drives have a finite lifespan and, while usually more reliable than floppy disks, will experience failures. More often than not, in my experience, users don't think about these things until it happens to them...and even the ones who do often don't take steps to prevent it from happening again.

I immediately copied the data to another drive and burned a master CD of the information for them to store.

Perhaps there's another attitude that permeates people's minds as well. The above is really a side effect of the fact that users aren't system administrators; they don't want the responsibility, and they don't take the time to do it. If something goes wrong, they'll just blame someone else for it. They're focused on their own primary job functions or skill sets. I get that.

But I also saw this on a mailing list for REALbasic not too long ago. A common issue for programmers using RB is that they'll use an older version of the software because a particular product they created or function they need is deprecated in newer versions or they need to test compatibility. One user needed an old version that wasn't installed on his computer anymore, and because of mixups and missteps there was a delay of a few days until REALSoftware and the developer cleared up issues and the user got an archived version of the programming software.

To be clear...REAL had no responsibility, and they don't claim to have such a responsibility, to their customers for supplying old versions of their software.

The user complained quite bitterly that he lost money because of the delay and was vocal about slamming REAL for the mixup. Others in the list asked why he didn't have the copy of the installer on hand.

Perhaps the issue was one of saving face, but many other users chimed in saying that they had CDs burned of old installers and their projects. "What happens if you can't get the old version?" "Why aren't you making backups of backups if your livelihood depends on having it available??"

These were programmers. People who sit at keyboards for hours on end, with paychecks depending on their systems and software working reliably. And still many don't have quality backup plans in place.

Even at home I have three drives copying my personal data, usually once a day. When I went in for surgery, I sent a message to a computer-savvy and trusted friend with a general outline of where my data is kept and how to get into certain accounts. Should something have gone wrong, quite a bit of banking information and accounts would have been accessible to him, and I told my wife to contact him so he'd assist her with getting my stored data. Is it a perfect backup strategy? No...if I have a fire at the house, my backups will be fried unless I'm very lucky. I'm working on a new strategy for the near future, though. At the moment my backup strategy is really just a way of surviving hard disk failure, or very short-term "Oh crap, I deleted what??" recovery.

If you are a vendor, a consultant, even a home user...ask yourself this:
  • What information or tasks do I use my computer for that I can't go even a week without? A day? An hour?
  • If my computer dies or is stolen, how can I get back up and running?
  • How long can I work without my computer?
  • How much money will I lose every hour (or day) if my computer or system isn't available?
  • How safe is my information if there's a fire in my home? Will my backups (if there are any) survive?
  • If I can restore information, how much information will I lose?
  • Is my information worth more than the cost of creating an effective backup strategy?
  • Do I have information documented in case something happens to me? Are others depending on the data?
  • How upset am I going to be if my computer is damaged, data lost, stolen, etc.?
These are some things to get you thinking about the implications of a backup strategy. Perhaps, if you answered the last question with "not really upset at all"...backups aren't something you need. But if you stand to lose a paycheck over losing your computer for a day, you really need to figure out (or pay someone to help you figure out) how to recover and get back up and running in the event of the unthinkable.

Find someone to help you cover your bases, to think of things that you wouldn't think of as a non-administrator...do you need redundancy? How far back should backups go? A day? A week? A month? Decide what data is crucial to your business and whether it's just the data or also the functionality that is necessary. Come up with strategies to get your business up and running again in the event of catastrophic failure.

Being able to recover with only a couple hours' worth of data loss is meaningless if you lose everything when the building burns down, just like a backup is useless when a user comes to you needing a report they deleted a week ago and your idea of a backup is a RAID array.

Wednesday, June 3, 2009

An Ubuntu Essential: Wicd

As I mentioned before, I'm using Ubuntu Linux on my primary machine right now. Since I moved to version 8, there was some glitch with networking where it wouldn't let me hold a static IP...after posting to a mailing list for help and Googling around, the conclusion generally came down to, "Network Manager sucks. Replace it."

With what?

I hesitate to replace default system components because the more you goof with that, the harder it can be to get help (Ever troubleshoot something for an hour to realize that the instructions aren't working because you tweaked some miscellaneous thing three months ago and the fix happens to rely on you having the default configuration? It's another reason that OS X delivers a consistent and generally user-friendly experience while Linux distributions are so diverse that it's hard to give a simple "default" set of instructions for helping newbies...).

But at the same time I couldn't have my workstation periodically altering its address on the network; I have some network management in place that relies on the machine being consistently found at a particular address.

But eventually I bit the bullet and installed the application recommended by others...Wicd. The instructions were fairly simple to follow: add the repo to your list, enable it, get the GPG key and install that from the command line, then just do an install from Synaptic (or use apt-get from the terminal), both of which are standard in Ubuntu for managing your installed programs. Then you're done.

Even better, this program is such a popular alternative to the Network Manager that with the release of Jaunty (version 9.04) it turned out Wicd is in the Universe repository. Just enable the Universe repos, update, and tell Synaptic to install Wicd.
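On Jaunty, assuming Universe is already enabled under Software Sources, the whole install boils down to:

```shell
# Assumes Ubuntu 9.04 (Jaunty) with the Universe repository enabled.
sudo apt-get update        # refresh the package lists
sudo apt-get install wicd  # install Wicd from Universe
```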

The installation will automatically remove Network Manager and replace it. I've had zero networking problems since swapping it out. If you're running Ubuntu, and from anecdotal evidence if you're running Ubuntu on a notebook and need wireless connectivity to work more reliably, check out the link to Wicd's installation page. Installation isn't hard and it'll save you quite a bit of swearing...believe me...