Friday, July 31, 2009

What Would You Do With a 3D Printer?

Most people probably aren't aware that these things exist. If you're not a tech person, I'd still encourage you to try reading through this post, because sprinkled throughout are several little tidbits you might find interesting (or not...either way, feel free to leave a comment discussing it...)

Generally speaking (from the perspective of the home market), the most familiar printing technology takes paper from a tray and deposits or layers some form of ink or toner onto the paper to form images and words. Think laser printers and inkjet printers.

A second "printing" technology (using the term loosely) is used with CNC machines; most often today associated with Computer Aided Design, a computer creates a design that is then exported to devices that automate a series of actions on a material to produce an end product. For example, you create a cool logo and sent it to the CNC machine with a piece of wood which then uses a router to create the logo in the block of wood. On TV I've seen laser engravers and cutters that slice pieces of metal into components of motorcycles and such. The technology is actually far older than implied here but it wasn't until minicomputers became affordable that the automation of building tools became cheap enough and integrated enough that, say, your average high school shop could afford to integrate CNC-type devices into their computer aided design courses.

The third printing technology limping its way onto the scene is 3D printing. Instead of using inks or toner and paper, 3D printers use a polymer (plastic) goo that is extruded...pushed...through a nozzle or print head of some sort and solidified using lasers or heat or some other technique; the printer then builds up an object or component layer by layer, slice by slice.

There are commercial 3D printers like those from Objet. They even have a "desktop" printer called the Alaris 30. The drawback? You need $40,000 to get one. Ouch.

Why would someone get one? The cost of one of these babies means they're mostly limited to businesses that manufacture things, where they're used to create mockups and product models.

The high price tag and sheer "cool factor" means that there is a growing homebrew market of people creating these things in their garages, like the RepRap made out of Erector Set parts. It's not as streamlined or as accurate as the printers that cost as much as a house, but still, cool stuff.

There's even a guy using a home-made 3D printer ("Fabber", for Fabricator) to create crazy 3D candy using sugar. You can see pictures of the CandyFab here and the article on the project is found here.

There's a Fab@Home wiki project that is dedicated to homebrewers working on making their own home-made 3D fabricators (printers).

The question is, what will happen when or if these things come to the home?

There is already work being done to retrofit inkjet printers so that instead of ink, you can print circuits, referred to as Printable Circuit Boards. Why would you want to do that? Because you can print things like cellphones. Yes, paper cellphones. You print it, fold it, use it, and when it wears out, you throw it away (just what we need...the ultimate in customer convenience trumping environmental concerns...). Advances in flexible conductors and materials allow printing displays, ID systems, batteries, muscles...all sorts of neat things. Unfortunately these advances are so niche that I doubt you're going to see them in the home until there's a kind of convergence in the technology (and the price barrier is reduced substantially).

But none of these is quite like 3D fabbing. Creating your own three-dimensional sculptures? Or parts that you can assemble like a model, or your own radio-control car? There is definitely a market for this type of device...imagine purchasing something from Amazon and, instead of waiting for it to ship, you wait half an hour and it pops out of the printer next to you.

My thought is that if homes get cheap 3D fabricating technology, you're going to have a lot of people making pornographic scenes and sculptures for giggles. It fits...the advances that made today's modern computer and graphics technology possible, as well as a significant driver in getting the Internet into the home, were video games and pornography (another article on this is here). I've heard it said that if Star Trek's Holodeck technology were to become reality it would be the end of civilization...no one would want reality when they can have indistinguishable virtual reality in which to, um...yeah.

What do you think? What would you do with a fabber/printer that creates 3D objects, even if it were limited to making objects out of only sugar or plastic, or weren't a precise, high-resolution printer, or couldn't create complex machines (you can't print out a computer...but you could print out parts and assemble them)?

Monday, July 20, 2009

This Old PC

I had a question in the comments from someone who had read part of the blog, asking what I thought could be done with an old computer. To quote in part:
what do I do with an older computer that still works, but is a bit behind the times? I have a "DJ" computer that has never been hooked up online. I am supposed to be able to use it for music, but it's just sitting in my closet, covered up sleeping. It is quite a bit slower than the stuff out there today, but works for playback in a Disc Jockey situation. The drives are fairly large for music storage, but the speed is slow.
He didn't specify what the specs were or what he'd like to do with it, so I'll assume he's just asking in general what can be done with it.

The difficulty is that it really depends on what you have in place already and what you're interested in doing.

Scenario one: dedicate the machine to a side purpose. I have a very old system that I use only for my iPod. In this particular case I scrounged 512 meg of RAM for it and a spare-parts CD burner; the machine already had a 2 GHz Celeron processor, which brought it barely on par for running Windows XP without falling asleep while it booted. I installed AVG free edition and iTunes. Then I installed the free edition of VNC (RealVNC.com) so that the system can be remotely viewed from my primary computer (that way I don't have a monitor connected unless I'm troubleshooting something). It's slow as molasses, but it has enough hard disk space for my podcasts and song purchases; I don't need to play with a virtual machine in order to use iTunes with my iPod Touch. It's just a minitower sitting on the floor and for the most part is out of the way.

Scenario two: I have a couple of old systems that I keep primarily for parts. This assumes that you have a bit of geek in you and space to waste, but I've used old systems to cannibalize parts for other machines; sometimes that 128 or 256 meg of RAM can make a bit of a difference, or the CD-ROM drive craps out in an ancient system and the machine I already stripped of memory just happens to have a usable one wasting away.

Scenario three: you have some network in place and could use storage for files. One handy option is to create a file server. This is especially nice if you have more than one hard drive in the machine, since you can set up something akin to software RAID to keep your information safe. On a home network, an old system will probably max out its network connection long before the processor is overtaxed. If you can, make sure you have two large drives, then download a dedicated software package for turning the machine into a file-sharing device; this is called a NAS, or Network Attached Storage. You can then set up the computer, disconnect the monitor, and just set the tower aside along with the keyboard and mouse. (That's assuming you're lucky enough that the BIOS...the computer's internal configuration manager that displays all those cryptic messages at bootup about memory, setup, and boot devices before the operating system comes up...is able to boot without belching errors when the mouse and keyboard are missing; I usually keep them attached just in case they're needed.)

A good place to start with that type of project is the FreeNAS project. There's a Wikipedia article about it as well. Basically I'd look at downloading the liveCD and using an inexpensive flash drive to store configuration information; you set up the BIOS to boot from CD-ROM first and then let the FreeNAS configuration format the drives and handle everything from there. Just make sure you assign it a static IP address so it's at the same location on the network after every reboot.

The interface lets you configure RAID, encryption, and various services from your web browser on another computer. Handy! There's a configuration guide on their download site.

Fourth scenario: you are curious about Linux and want to try it, but are afraid to try it on your "primary" computer for fear of screwing it up. Old computers are usually capable of running Linux. It may not be great; I mean, popular desktop distributions of Linux usually need at least 512 meg of RAM to work well, but I've had usable configurations at 256 meg of RAM. Also you won't get eye candy if you're using an old or underpowered video card. But if you're just trying to figure out if you'd like to give a distribution a serious try, this is a great way to have a small test run, and if you screw it up then who cares? You wipe and reformat it. On underpowered systems I'd first try out something like Xubuntu; it tends to be a little nicer on limited resources.

Fifth scenario: you have a young one you want to expose to technology. Full disclosure: my youngest is using scavenged, used Macintoshes. They're less prone to viruses and malware, and usually easier to manage (in OS X you can easily limit the applications displayed in Finder, for example). My four year old loves his Macintosh. But you need at least a G4 with some oomph and at least 512 meg of RAM in order to enjoy any Flash-based games, and you'd have to make sure it has a DVD drive for them to watch DVDs. I managed to snag one for about $400 from a refurb shop, an old eMac, before finding a G5 system that was going to be recycled from a school.

Schools may prove, especially over the summer, to be a source of old systems that are going to be scrapped. You normally have to pay to have systems scrapped out; legally, you can't just throw out old monitors or systems because of the toxic metals and chemicals used in making them. Schools typically (legally) have to ship them to recycling centers or pay to have them disposed of, so if a citizen puts out word that they're looking for old systems you may find an old Mac lab being disassembled. Try making friends with a local Intermediate Unit if you're in Pennsylvania (in New York I believe the equivalent is called a BOCES)...their tech people know most of the tech people in surrounding districts and might be able to help you out.

But I digress...

If you're in possession of an old PC and you have a younger kid who could use a computer, you can refurb it yourself for basic services. What I mean is that if you have 512 meg of RAM, you should be able to install Ubuntu (or Xubuntu) with Firefox (so they can use online games and services) and Thunderbird for email, and a DVD drive will let them watch movies. They can't play Windows games, but it would keep them safer online; old systems with Windows may invite malware, and on old hardware it won't take much to make the system slow to an unusable CRAWL. Using Linux means they can still use OpenOffice to type up reports and get basic schoolwork done; if they're technology oriented, there are plenty of free programming tools available as well. They should be able to play music or use cameras for photo uploads too.

If you're tech savvy you can even set up Secure Shell (ssh) on the system so you can remotely log in to help them or remotely perform updates. You might want to put it on a non-standard port, since there are automated scripts constantly poking and prodding the default port for vulnerabilities, but if you're comfy with tech (or want to experiment) it is extremely handy to be able to tunnel over secure shell, both for remote administration and for "remote desktop" viewing. You just need to configure it on their system and also configure their router to allow access on whatever port you choose (or, if it's on your own home network, you needn't worry about the router). But this is a different topic.
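Different topic or not, here's a rough sketch of what that tunneling looks like in practice. It's just a little Python wrapper around the standard ssh client; the hostname, username, and port numbers are made-up examples, and it assumes you've already changed the Port line in the remote machine's sshd_config to something like 2222.

```python
import subprocess

# All of these names and numbers are invented for illustration; substitute your own.
REMOTE_HOST = "kids-pc.example.com"   # or your home router's public address
SSH_PORT = "2222"                     # the non-standard port you chose in sshd_config
LOCAL_PORT = "5901"                   # local end of the tunnel
REMOTE_VNC = "localhost:5900"         # the VNC server as seen from the kid's machine

# Opens an SSH session that also forwards local port 5901 to the remote VNC
# server, so a VNC viewer pointed at localhost:5901 rides inside the encrypted
# tunnel. Equivalent to: ssh -p 2222 -L 5901:localhost:5900 parent@kids-pc.example.com
subprocess.run([
    "ssh", "-p", SSH_PORT,
    "-L", f"{LOCAL_PORT}:{REMOTE_VNC}",
    f"parent@{REMOTE_HOST}",
])
```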

Sixth scenario: you want to have a software router or firewall. Why would you do this? Well, I know you can most likely make do with the fifty dollar jobbies from Staples for most routing (although to be honest I've had horrible luck with a lot of the combo SOHO (Small Office Home Office) devices; these are units that have a switch, router, and wireless function built into one box. But that's another story.) What a PC-based router gets you is basically more control. You can get charts that track your bandwidth use, see what is hitting your connection, control your network usage...the list goes on. You can also create more custom rules for managing your network, and with the right hardware even create what's called a DMZ; this lets you expose one machine to the Internet while blocking access to your own personal computers, so if someone hacks that exposed computer your systems are still safe.

I personally just like having tools for monitoring my network traffic. You'd be surprised what you're actually doing on the Internet.

Like FreeNAS, there are dedicated (TINY!) bootable disks made just to do this, for example Smoothwall (this particular distribution has a pay-for version with extra features and a free version). It also has the ability to do other things, like run a proxy server to speed up your web browsing.

Similar projects include IPCop, M0n0wall, pfSense, and Shoreline. Some of these don't even need hard disks to work, just a liveboot CD or floppy disk! Just be aware that in order to use a system as a router or firewall you're going to need at least two interfaces; if you're on dialup you'll need a modem and a network card, and if you're on DSL or some other high-speed connection you'll need two network cards. Fortunately they're relatively cheap now...you can get older cards that should be adequate for this purpose for $20 to $40 if you look around.

Seventh scenario: if you already have a Linux system on your network, you can set up a remote boot system. This is not necessarily for the faint-of-heart...it takes some geek cred to do this.

There are two primary ways to start researching this. There are thin clients, and there are diskless systems. To understand the difference you have to think in terms of resources; computers run with displays, memory, disk space, and processors that do the actual work on the system.

Thin clients boot up and connect to a server to run applications. All resources...memory, the processor, disk use...are on the server. Basically, if you're on the system that's acting as the server and someone else on a thin client connected at the same time starts watching animations from weather.com, you will have a slowdown on your system through no fault of your own. For this reason you need a server that is hefty in available horsepower, but the clients can be ancient PC's that couldn't even run Windows XP on their own.

Diskless systems boot up and download their operating system from a central server, then connect up a shared drive over the network and load programs from that system. That means they're sucking down network bandwidth and drive access from a server, but once the program is running, all memory and processor use is on the local system. In other words the client machine doesn't need a hard drive to work, just a good network connection.

The thin client sucks all resources from a hefty server; a diskless system just mooches storage but uses its own memory and processor and display capabilities.

Either way it's easier to manage upgrades and updates from a central server. There are setups that make this easier if it's a project someone wants to undertake. It may even be useful as a way to roll out workstations to other locations around a home so other family members can get Internet access without buying a lot of expensive computers. Drawbacks...if the server dies, they all die. Also, they're running Linux, so if you absolutely must use Windows you'll need to see if you can virtualize it (probably not recommended, since it would be resource intensive) or use a different solution.

There are howtos available for Ubuntu for both diskless configurations and terminal servers. There are also distributions made specifically for creating terminal server setups. Either way they should give some starting points for researching the topic, and with a terminal server even truly ancient hardware can be usable, since the clients use another system's resources to run. It may even be faster than what the actual hardware would support on its own!

Last scenario: if you happen to have an old all-in-one system lying around, use it for decorative furniture! Yeah, weird, I know. But it still has some geek appeal. For example, turn it into an aquarium. Or a couch. There are some sites that discuss turning old systems into things like an MP3 stereo or a free digital video recorder for your television, but you'll probably need to invest in slightly more expensive hardware to gain that ability. If you really want to stretch it you could always set it up as a digital picture frame to cycle through home photos, or set it up with some webcams and use it as a home security controller; in general that isn't extremely taxing on the system once it's set up.

Those are all the suggestions I normally have for old systems off the top of my head. In general, old computers alone aren't worth much. Macintoshes can hold some value despite being rather old (even G5's, which are PowerPC based, can go for several hundred dollars; I just did a search on Amazon and they have a PowerMac G5, dual 2 GHz processors, 512 meg of RAM, 160 gig drive, and DVD-R/CD-R drive for $777.77), but most PC's of this age bracket would go for maybe a hundred dollars down to free. Legally, you can't even throw them out. Proper disposal usually means finding a recycler that will take it, or contacting your local dump and asking if they can take it or know of a company that can handle the disposal details (or contact your local public school or college's tech department and ask if they take home systems for recycling).

You do need to be aware that sometimes these systems will need some small investment to increase their life...sometimes some memory sticks can boost performance significantly, or it may need a cheap network card added. If you invest more than a hundred bucks it's probably not worth it. Also you may need to be careful of the hard drives...drive failure is a question of when, not if. You can get new drives for fifty to sixty bucks off Amazon.com; I use an external hard disk to make backups of my little iPod system in case there's a system failure.

Hope that helps answer your question! If you have any other questions feel free to ask, and if anyone else has suggestions for using old PC's leave a comment!

Saturday, July 18, 2009

High Availability Through System Redundancy

The title of this post is quite a mouthful. If it didn't scare you away and you're something of a geek at heart though, read on.

If you're a regular follower of the blog you know that I had two types of side projects floating in my head. One was writing a story; the other, creating a saleable program.

I had started on the program but shelved it temporarily to work on some stories starting with The Writing Show's Halloween contest entries. But that didn't mean I stopped thinking about the software aspects of my interests.

I was mulling over what I would want to do for a fledgling business using a web service as its means of income. Part of that...a big part...was having the systems be available for use as much as possible, since downtime can kill your business.

The problem is that bootstrapping a business means budget...low budget. Such a small shoestring that we're talking velcro for the shoes.

Part of that mulling over of the idea is what led to my article a few days ago about RAID. It's not a total solution for high availability. What I want is for a separate computer to be ready on standby to take over if one system craps the bed, so to speak. RAID simply can't protect against a motherboard or controller failure; it's not meant to.

Solutions from big companies...if you deal with technology you've probably heard of them...can solve this problem for you. As long as you have a couple hundred thousand in your account to cover your first year.

But that got me to thinking about Google. Many may not know this, but Google's hardware is all commodity parts. They just built a staggering number of systems and concentrated on tying them together with high speed switches and custom software for the filesystem and the data shuffling that occurs on their systems behind the scenes. The systems themselves aren't all that much different from what you can buy from any computer retailer.

Services can be offered using inexpensive hardware.

I did a little more digging and found the DRBD project. It's a mature Linux project that basically creates a RAID 1 array...mirroring...across the network.

What does that mean? Here's the use case I'd look at. Buy two computers. You can get decent systems for...let's say a thousand dollars. Put a small hard drive and a larger data drive in each one.

On the smaller drive, install Linux. Configure the OS, networking, etc.

On the larger drive you place your data...your web server, your database, etc.

Now install DRBD to "mirror" the large drive on machine 1 to the large drive on machine 2. Using a "primary-primary" configuration, the data drives on the computers are kept in sync with each other.
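DRBD itself is configured through resource files and a kernel module, not application code, but the core idea is simple enough to sketch. The toy Python below is definitely not how DRBD is implemented; it just shows the principle: every block written to the local data drive is also shipped to the peer over the network so both copies stay identical. The addresses and paths are invented.

```python
import socket

# Toy illustration of block-level network mirroring -- NOT real DRBD.
PEER_ADDRESS = ("192.168.10.2", 7789)   # made-up peer address; adjust for your setup

def mirrored_write(local_disk, peer, offset, data):
    """Write a block locally, then send the same block to the peer node."""
    # 1. Commit the block to the local data drive.
    local_disk.seek(offset)
    local_disk.write(data)
    local_disk.flush()
    # 2. Ship the identical block to the peer so its drive stays in sync.
    header = offset.to_bytes(8, "big") + len(data).to_bytes(4, "big")
    peer.sendall(header + data)

# Example use (the peer would run a matching listener that applies each block):
# peer = socket.create_connection(PEER_ADDRESS)
# with open("/dev/sdb1", "r+b") as disk:          # the "large data drive"
#     mirrored_write(disk, peer, 4096, b"\x00" * 512)
```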

If there's a failure on the primary system, where all your data is being accessed, you can shut it down and your data will be intact on the second system, ready to take over. In the abstract, you have two servers ready to work for you in case there's a problem with any component in one of them.

Unfortunately there's more to it than that. You can implement this, sure, but if your server dies, someone has to be onsite (or have remote access to server 2) so that traffic can be redirected manually to the new system.

To fix that you need heartbeat software. This is software that runs on the two computers and constantly chatters back and forth, usually every few seconds, just to see if the computer's "mate" is still alive. If not, the heartbeat software runs a script that tells the second server to take over, alters its IP address on the network so it is now the master system, and makes any other alterations necessary to take over the role of the primary computer.
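To make the idea concrete, here's a minimal sketch of what heartbeat-style monitoring boils down to. This is my own toy Python, not the actual Linux heartbeat package, and the address and thresholds are invented.

```python
import subprocess
import time

PRIMARY_IP = "192.168.10.1"   # invented address of the primary server
INTERVAL = 2                  # seconds between heartbeat checks
MISSED_LIMIT = 5              # consecutive misses before declaring the primary dead

def primary_alive():
    # A single ping stands in for the real heartbeat chatter between the nodes.
    result = subprocess.run(["ping", "-c", "1", "-W", "1", PRIMARY_IP],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def take_over():
    # A real failover script would claim the service IP address, promote the
    # local DRBD volume to primary, and start the web server and database.
    print("Primary stopped responding; promoting this machine to primary.")

missed = 0
while True:
    missed = 0 if primary_alive() else missed + 1
    if missed >= MISSED_LIMIT:
        take_over()
        break
    time.sleep(INTERVAL)
```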

Of course this would mean that RAIDing a lot of data over the network could take a toll on performance. To that end I think this commodity hardware would need a second gigabit network card to connect the two computers with a crossover cable so they could just talk to each other; a dedicated hotline just to exchange data.

I've also thought about management of the system; easy backup, easy archiving, etc. I've long been a fan of virtualization for this. I can create a generic virtual computer with its own virtual hard disk device (on your computer it looks like one giant file; when you boot the virtual computer, it uses this giant file as if it were a hard disk). Everything is sandboxed and secure on that fake computer.

I've loved it because once I get a computer configured to provide a specific function, for example a web server for an in-house bulletin board system, I can back it up by shutting down the virtual machine, copying the giant drive file to another location, and then firing up the virtual machine again. We had a system fail, and to bring up the virtual server I just copied that backup file to another computer and fired up VMware on that system. No reconfiguration necessary and minimal downtime.
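As a simplified example, here's roughly what that backup step looks like scripted in Python. The paths are invented, and it assumes you've already shut the virtual machine down (from the console, or with whatever command-line tool your virtualization product provides) so the disk file isn't changing while it's copied.

```python
import shutil
from pathlib import Path
from datetime import date

# Invented paths -- point these at your own VM folder and backup drive.
VM_DIR = Path("/vmstore/forum-server")       # contains the .vmx and .vmdk files
BACKUP_ROOT = Path("/mnt/backup/vm-backups")

# With the VM powered off, the whole folder is just ordinary files; copying it
# captures the machine's disk, configuration, and everything else in one shot.
destination = BACKUP_ROOT / f"forum-server-{date.today().isoformat()}"
shutil.copytree(VM_DIR, destination)
print("Backup complete:", destination)
```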

In other words that generic computer can be copied to another device to run in a pinch. If I didn't have that system virtualized I would have had to reinstall an operating system on a dedicated piece of hardware, configured exactly for that hardware (device drivers are a pain sometimes!) and it would have taken more time and hence more downtime. A virtual machine is the same no matter where you're running it...as long as you have your virtualization software installed and space on that system's drive (and memory) you're good to go.

Hmm...I thought about it and wondered: what if I ran virtualization software and mirrored the virtual hard drive file using DRBD from computer 1 to computer 2?

VMware has a solution like that. As long as you're running hardware they've approved with a lot of beef, plus VMware's additional enterprise tools...for lots of money...they can give you the ability to sync up and share virtual computers between two or more servers.

The lots of money part would be the tough part, though.

Digging some more...there is a Linux solution called Xen. There are other virtualization solutions under Linux, but this one is rather mature and fast; plus it's coming up with hits in Google coupled with DRBD. Hmm...

Xen should give me the ability to install virtual Linux machines on a server, and because it's associated with the Linux High Availability Computing Project, there are scripts and hooks built into the heartbeat management software and DRBD that will let me take two machines running DRBD and, when one machine fails, have the second one automatically take over the virtual machine. Nice!

This would also simplify things like hardware upgrades or maintenance or other issues. Best of all, aside from the cost of time and hardware, the software to do this is free.

I know that this is all theoretical for me at this point. I've only been exposed to very basic virtualization software and basic hardware for redundancies like RAID cards and the like. I've definitely never done anything like working on a home-grown cluster. Fortunately I'm so far behind on having a working project to sell that this isn't a big issue right now.

Periodically I poke around and see what hardware is available or being thrown out by friends that I could procure for use in creating a very basic testbed, if for nothing else than to test performance on trash hardware. I'm also not fully certain how well any of what I've found so far would actually work in the real world; there could be other setups that work better. I'm still reading through material in my copious free time from the clustering and high-availability sites to see if there's some other solution that would match my proposed needs better.

If anyone out there knows more please feel free to leave comments. I'm open to ideas. In the meantime I'm probably going to continue with some writing and researching and eventually continue on with the programming...hopefully...

For others reading this who are techie hobbyists, I hope you found this at least a little interesting. If you are a techie with a home "data center" this may be the type of project that could boost your geek cred substantially. It would be cheaper than hardware RAID and more expensive than a big system with software RAID, but properly configured it would be safer than either one because it eliminates the computer itself as a single point of failure...instead that title would go to your network infrastructure (router and/or switch) or your "datacenter" itself (if both systems are in your house and you have a house fire...).

Interestingly enough, if performance with compression is decent (and depending on load), you might even be able to eliminate part of that risk by locating the two servers in separate geographic locations and connecting them with fiber or a dedicated VPN. But again, I'd need to test performance for that with some testbed computers.

What do you think? Interesting? Anyone have ideas to share?

Thursday, July 16, 2009

Why is Cross Platform So Hard?

This is a followup to yesterday's blog posting about my trying to choose how to do a cross platform programming environment (or language).

After researching various ways to try creating a cross-platform application I started asking myself, "Why is it so damn hard to do this?"

I think I already knew the answers to this. I just needed to sit and reflect on it a bit.

First, look at the premier languages (and frameworks) that already achieve this goal of cross-platform compilation of native applications (there's a tongue twister for you). REALbasic is probably the best choice for "easy" cross-platform application development. Here are the issues that end up being tradeoffs you make for cross-platform compatibility:

Applications are tied to the lowest common denominator of features across Mac, Linux, and Windows. Certain cool interface options (I seem to recall some griping on the support list about not having multi-state check boxes, but I haven't verified it...that would be an example, though) are available on one platform but not the others. That means that REAL chooses (correctly, in my opinion) not to support them.

There's a certain discomfort many people feel about tying their business success to the fate of a small company. If REAL folds, you need to migrate to a new platform and probably rewrite your primary products, which is quite a gamble if a significant percentage of your business is based on their proprietary product; and since REALsoftware is a private company, they have no obligation to tell their customers anything about their financials.

Certain operating system capabilities are only supported if you start digging into special calls to the platform's Application Programming Interface; for a newbie, this is a big hurdle. REALbasic does this through special declares, or you can pay extra to companies like MonkeyBread Software for a set of plugins that add functionality, but using these calls means using pragma declarations to conditionally compile certain code only on a particular platform. In other words, using special system calls kills a bit of your cross-platform compatibility.
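The same pattern shows up in any cross-platform codebase, whatever the language. As a loose analogy (Python rather than REALbasic, and a deliberately trivial example), platform-specific calls end up fenced off behind checks so they only run where the underlying API actually exists:

```python
import sys

def flash_taskbar_button(window_handle):
    # Guard the platform-specific call so the same source runs everywhere.
    if sys.platform.startswith("win"):
        import ctypes
        # Windows-only API call; other platforms have no direct equivalent.
        ctypes.windll.user32.FlashWindow(window_handle, True)
    elif sys.platform == "darwin":
        pass  # would need a completely different, Mac-specific approach here
    else:
        pass  # and a third approach (or nothing at all) on Linux
```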

Another difference I've seen griped about is that when you use cross-platform tools, often the native "look" is lacking because the widgets...the name given to things like the resizing handles and minimize/maximize buttons and such...are implemented by the vendor and not using the native operating system functionality.

Maybe others have suggestions for an easy way to do native applications across platforms that doesn't involve creating a web application or using an interpreted language. Maybe others can offer corrections. But from what I've found, this is the current state of affairs: you can't get cross-platform work without being locked in to a particular vendor's product and learning that vendor's implementation of the language, and in using that implementation you also get limited help from peers, since there are far fewer users of that package than there are in the general pool of BASIC/C/C++/etc. language programmers.

I'm open to suggestions and anecdotes, though...feel free to leave a comment!

Wednesday, July 15, 2009

Cross Platform Development for a Hobby Project

Lately I've had my programming project on the backburner while I worked on the story I had blogged about previously. The road that led to me dipping my toes into the programming waters has been a rather long and winding one and I'm by no means a professional programmer. I'm a painfully green beginner at programming.

When I was thinking about trying to start a new programming project, one of the things I really wanted to address was the question about platform. "Platform" here means what operating system the program runs on...Windows, Linux, OS X, or maybe even the iPhone.

What I really wanted was an application that works on multiple platforms. There are a couple ways to address it.

One, you create the program in a language that is relatively portable, like C/C++. You'd still end up having to maintain two or more code bases, though. This also increases maintenance chores since you can have bugs in one set of code that don't exist in the other and you're probably going to have workarounds for quirks on one platform that aren't in the other. It would be a maintenance headache at best, a nightmare at worst.

Two, find a language and toolset that is cross platform to begin with. This means that you have one codebase to maintain and one IDE in which to program, but the resulting program can target multiple platforms. One of the benchmark IDE's for this type of task was Metrowerks CodeWarrior. It used to be a really cool C/C++ environment best known for running on the Macintosh, but it was also available on Linux and Windows, and it could compile programs to run on Windows, MacOS, and Linux. This was the compiler I was using while in college and I loved it. Through a wonderful series of missteps it ended up turning into an embedded-systems tool only, meaning it was...is...used to create programs for microprocessors in things like phones and microcontrollers. In other words, it's no longer an option for cross-platform desktop development.

Another cross-platform compiler is REALbasic from REALsoftware. It closely resembles Visual Basic in many respects but, like CodeWarrior, it compiles applications for Windows, the current generation of Macs, and certain distros of Linux.

There really aren't too many alternatives for cross-platform development. There are a few tools being worked on, like Lazarus, a cross-platform compiler based on the Pascal language, but it's still under development and getting certain features to work properly means playing with add-on libraries or packages.

My biggest issue with this solution is that it requires learning that vendor's dialect of the language and then becoming a slave to the company that produces it. For example, I liked CodeWarrior. It used a more-or-less standard language (yes, it had a couple quirks, but no more than expected) and the Standard Template Library for C++. Adapting my coursework to work with that compiler was usually not a problem. Then the company basically dropped all support for it and discontinued the product. As the platforms were updated, the code generated by the compiler fell further and further behind until it just wasn't practical to use anymore.

It also means that these tools have smaller user bases, which means less support and help. REALbasic is really neat; except that I'm paying for a product where, when I need a code sample, I have to turn to a mailing list or forum with a relatively small base of people who spend their time coding and are more or less experts compared to me...and when they're asked for a lot of help with coding, they're usually paid as consultants.

They're helpful, but it's just wrong to take advantage of them like that.

Single-platform environments like Microsoft's Visual Studio have books out the wazoo for beginners and experts alike. There are code snippets available for the taking online as well as blogs and sites dedicated to explaining how to approach various programming projects. While REALbasic is close to Visual Basic, it is just different enough that it can be a pain in the arse for someone who isn't already a prolific programmer to port between the two platforms.

Three...you can use a cross-platform VM. This is learning something like Java where you write the source code, compile to bytecode, then it runs on an installed virtual machine (the VM). I hate this type of solution because it means your target must have an installed VM and you're kind of dependent on changes to the VM. In other words, if you create a Java application, you need your client to have the Java software installed, and there's a chance that the next update to Java could break your software. The same goes for Microsoft's .NET-based software.

Four...you use a scripting language like Python to run on multiple platforms. I personally shy away from this option because, like option three, it requires your client to have the language support installed (distributing a Python program means the client needs Python installed), and since it's an interpreted language you're probably distributing the source code for your client to go through or alter outside your control. I'm not a huge advocate of digital rights management, but I am afraid of someone taking advantage of easy access to the source code to alter something they're not supposed to and breaking the application. There is also some hesitation because interpreted languages tend to have a reputation for being slower and/or more of a memory hog. Some developers dispute this, though.

Five, write the application as a web application. If you create the application and test against a certain set of standards, then most web browsers should be able to run it. Problems? New web browsers may break the application, you have to deal with issues resulting from proxies and caches and such, and you can't "install" the application; the client must have a usable network connection. Also your app probably can't have direct access to much of the user's computer, because browsers are sandboxed from the computer for security reasons. There are popular frameworks for developing web applications, such as Ruby on Rails and Django, to simplify the task, however, and for some applications this approach works rather well.

Monday, July 13, 2009

Thinking of RAID on a System?

Over the years I've played with several different RAID configurations in a couple different environments.

Being paranoid about having my system available to me, I've used hardware RAID mirroring (RAID 1) on my desktop, while at work I've had the honor of using Dell PERC controllers for RAID 5 arrays (three drives combined into one redundant volume).

For a quick overview on RAID, check out this link. Basically it's a way of keeping your computer running if a hard disk fails without losing data.

Only it doesn't always work this way.

See, back when I first started playing with RAID there was debate about software RAID (where drivers in the OS handled the redundancy control) versus hardware RAID (where you purchased a controller and installed it into your computer so that the operating system...Windows, Linux...wasn't aware of the RAID setup at all other than what brand controller you had; you went into a boot menu to configure your RAID setup, let it do its thing to the drives, then installed your operating system).

The debate was over speed and reliability. Software RAID was evolving. Hardware was more reliable.

Today the landscape has evolved.

In my home systems I've used 3Ware cards for mirroring drives. It added a LOT to the cost of my computers, and I've not had any problems with them in use; in fact, the first one I owned managed to save a computer that was so old I had relegated it to my daughter's use and, later on, to being a test server. My daughter didn't even realize a drive had failed...I don't know how many weeks (or months) the one drive was crippled before I was in her room and saw an error come up about the volume being degraded.

The biggest con against using them? Cost. Adding these cards can cost hundreds of dollars more on a system. Check out the cost of a 3Ware card sometime to find out.

At work we have a lot of Dell servers with PERC cards. These are rack-mounted servers made for being servers; you would expect them to perform very well. Here's a fun story to share.

We had a server being relied on by hundreds of users as a file server (and home directory server). It had RAID 5; this means there were three drives that appear as one volume for redundancy...check out the previous link for more information, but it basically means that if one drive goes bad, an alarm goes off and the server keeps chugging away on the remaining two drives. You pull the bad drive, install a new one, plug it back in, and the RAID card rebuilds the data on the fly. That means you never even turn off the server and the users are never aware of the problem...you literally pull the drive out, swap in a new disk, and the server is supposed to recover without a problem.
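That on-the-fly rebuild works because RAID 5 keeps parity information: any single drive's contents can be recomputed by XOR-ing together what's on the others. Here's a toy sketch of the arithmetic (real controllers do this block by block and rotate the parity across all the drives, but the principle is the same):

```python
def xor_bytes(a, b):
    # Bytewise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

# Pretend each "drive" holds one stripe of data; the third holds the parity.
drive_a = b"users' files"
drive_b = b"more files.."
parity  = xor_bytes(drive_a, drive_b)

# If drive B dies, its contents fall right back out of the survivors:
rebuilt_b = xor_bytes(drive_a, parity)
assert rebuilt_b == drive_b
```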

I say supposed to because we had a drive fail and it didn't recover.

There are tools with Dell servers (for Windows) that let you monitor the status of the RAID array. We put in the new drive; it would start rebuilding, then the process would stop with an error.

After many retries (and much anxiety...remember, if one more drive died, we'd lose the server), we found out using the tools on the RAID card (meaning we had to shut off the server and use the boot menu on the controller to check the disks) that one of the other drives had one bad block on it that the controller and operating system had never flagged as bad.

In other words, we had drives A, B, and C. C failed. We replaced C with a blank drive and in the process of rebuilding data from A and B to drive C discovered that drive B had one bad piece on it that nothing else had noticed until now.

And because of that little problem the array couldn't be rebuilt.

Crap crap crap.

In the end we replaced both B and C, then rebuilt the machine from bare metal using a backup that was a few days old. Fortunately we lost very little data, but we did lose quite a bit of time and upset a number of people who couldn't get to their home directories for a couple days. Ouch!

So anyone telling you hardware RAID is a panacea...they're lying.

Some other considerations regarding RAID...
  1. If a computer breaks (and it's not the drive electronics), you can sometimes get data off by sticking the drive into another computer. Hardware RAID sometimes throws a wrench into the mix because it adds a signature to the drives so that the controller knows which drive is which when you swap them (hot-swap or not). That means you can't just pull a drive and stick it into another computer to get data off, unless you have identical hardware available to stick the drives in. This is of course true if you have data striped (like in RAID 5); you would think that with data mirroring (RAID 1) you could do it. Not always.
  2. Some systems today come with "hardware RAID" on the motherboard. They're crap. It's not really hardware-based as much as software; it's a chip with some programming on it to handle a kind of pseudo-RAID implementation that offloads the checksumming functions over to your computer's processor. See this article and this one for a few other opinions. These systems with RAID on the motherboard are also referred to as "fake RAID".
  3. "Fake RAID" also can be system-dependent. In other words, if you're running RAID with the onboard RAID chip and the motherboard dies, you lose your data because of the way the drives are formatted to work with that motherboard. You need to replace it with an identical (or fortunately similar) motherboard to access the data again.
  4. RAID is not a backup. It's there to protect access to a system in the event of drive failure. You still need to back up important data to another set of media.
  5. Software RAID in Windows and Linux means you increase your odds of recovering data if there's an issue with the system. Hardware RAID will often tie your drives to that computer. Today the performance of software RAID can meet or exceed hardware performance in many real-world use cases. See this article for some older numbers comparing the performance of (2004) Linux software RAID vs. a 3Ware controller.
  6. RAID means moving a single point of failure from one device to another. People seem to forget that. RAID protects you from a hard disk failure; but if your RAID controller dies, your data dies (unless you have a good backup). The only way to stop that is to have two RAID controllers. Then your motherboard becomes a point of failure.
  7. RAID is useless without monitoring tools so you know if the blasted thing is working. Here is another tradeoff: tools for Linux software RAID still tend to be cryptic, but if you're setting up an administrative environment you can usually set up tools to email or alert you if there's a problem. But it's cryptic. Not user friendly. Takes time to learn the ins and outs. Windows software RAID is as cryptic as Windows usually is. Hardware solutions usually mean running a proprietary tool specific to your controller from that manufacturer...more vendor lock-in. If you want flexibility you'll probably not get the pretty tools.
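As an example of the "cryptic but scriptable" side of Linux software RAID: the kernel exposes array health in /proc/mdstat, and a few lines of Python can watch it. This is only a crude sketch; a real setup would lean on mdadm's own monitoring mode or send email rather than printing to a console.

```python
import time

def md_array_degraded():
    # /proc/mdstat shows member status like "[UU]" for a healthy two-disk
    # mirror; a failed member shows up as an underscore, e.g. "[U_]".
    with open("/proc/mdstat") as status:
        return "_" in status.read()

while True:
    if md_array_degraded():
        print("WARNING: a software RAID array looks degraded -- check /proc/mdstat")
    time.sleep(600)   # poll every ten minutes
```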
My next system for home data storage will probably be using software RAID. I just can't justify the cost of hardware RAID anymore.

I have the 3Ware type cards in some of my systems now but have been bitten by an aspect of the proprietary nature of the vendor. I loved the cards, they worked well. But the really old system I mentioned? I recently installed Ubuntu 8.10 on it. I then set about installing the 3Ware monitoring tool, a little web-based thing that gives the graphical status of your card's logs and configuration. I can't get it to work because they don't support the newer kernels; because the card is old (but perfectly usable!) they aren't planning on adding support either. I'd need a brand new card. Which is complete overkill for this old thing.

I've also been bitten while repurposing an old Dell server with a PERC card running FreeBSD and later Linux; I can't get any tools to monitor the RAID status. I have to either reboot to the card's controller or wait until the status light on the front of the server starts going nuts to check on it.

While hardware RAID can give high-end configurations a little bit of an edge in performance and, properly used, a definite edge in monitoring the status with the hardware (it's nice to have the drives physically numbered on the cable ports so you know which is drive 1 when 1 has failed, or have blinking status lights on the controller or disk telling you, "Hey! I'm broken!!"), the extra cost isn't really worth it for most people's situations.

Software RAID is, to be sure, not easy for the beginner (at least not on Linux) to implement as you can imagine from this example. Once running it should be relatively portable, recoverable, and at least usable.

The good news is that these tools continue to evolve and become more friendly. Today's Ubuntu installer includes information at setup on configuring software RAID, making it far easier than before for people who aren't deep in the arcana of system administration to add that kind of support. Okay, maybe it's still not all that friendly. But as a longtime Linux user, believe me when I say it's come leaps and bounds forward in friendliness.

Are there any other points I'm missing that should be included here regarding RAID? Feel free to comment and let me know...

Friday, July 10, 2009

I Would So Totally Go For an Electric Car

There are people who relish complaining about the Obama administration, and while he hasn't created a Utopia in America, his policies apparently are having some beneficial effects on companies working on "green" products. In this case, electric cars.

There's a cool new concept vehicle being created by a company called XP Vehicles, Inc. that is called the Mini Utility Vehicle.

The MUV may be a relatively inexpensive car because it uses 70% fewer parts than the average car.

Yeah. Seventy percent.

Plus it's lightweight. Because it's partly inflatable.

Four passengers, 125 base miles per charge (or you can get a module that extends the range to 300 miles), totally electric, weighs less than 1,400 pounds, top speed of 85 miles per hour, and does 0 to 60 in 8.5 seconds.

And did I mention it's partly inflatable?

The seat, dashboard, internal structure, and carrying racks are inflatable or mesh suspension. Fewer parts should translate into fewer parts that can fail, lower manufacturing costs, and lower maintenance costs. It "refuels" using lightweight removable battery drawers that you take with you to recharge inside your home. The XP has motors built into the rear wheels, and the first cars to market are expected to have two rear hub motors and a motor controller. In other words, no transmission.

The article says that their target market is an age group (29 to 32 year olds) that is getting their first car but, they found, often can't afford a home, and insurance companies don't want to cover vehicles that need cords running to them. So...drawers of batteries that you can carry easily.

I'm a little leery of promises made before anything has been delivered, especially since the article states: "The battery drawer array features racks that can be easily removed from the car and consumer class batteries like those in an iPhone. XP will give consumers the option of taking the battery drawers out of the car and up to your house, apartment or hotel on something similar to little Razor scooters or over your shoulder."

I don't know if they mean that the batteries are like those in the iPhone, or that the method is similar to the iPhone's, or that the battery cells are iPhone-sized, but in general, don't compare the positives of a battery technology to the iPhone. iPhone batteries aren't replaceable by the owner. It's a big sore spot for iPhone owners. Whether that's what you mean or not, it's like casually mentioning that some sociopathic serial killer had a beard while telling someone you thought you'd grow a beard. Bad association.

The only trouble I see is that people tend to be...well, sheep. If something is different, something their neighbors or people in their community aren't using, they won't want to try it. So despite this vehicle using cutting-edge tech, it will need a big initial wave of positive customer experiences when it's introduced to the market or it will flounder. It will need to hit the ground running with good experiences and very low cost, then ride a wave of people seeing the benefits of ownership over time to have continued sales. If the car is as inexpensive to buy and maintain as the article promises, I would love to get one.

In the end promises are promises and reality is something else entirely; we'll not know much of anything until it is actually available on the market. Hopefully we'll see something more from XP in the next couple of years...

Friday, July 3, 2009

More on Thinking Before Sending Your Computer Out

Here's another wonderful story that people don't seem to learn from. A teacher was found with child porn on his laptop. He apparently was watching the videos "after hours" at the school.

Let's be perfectly clear: I'm not condoning child porn. Actually this post isn't about porn at all. What it's about is the general idea that someone had data that he probably should have kept private...would you advertise it if you were taking part in a crime? Duh...and instead of taking steps to secure that information, he sent the computer out to be repaired without giving any thought to that data.

After working in a repair shop I saw the kinds of things that people had on their computer without thinking about who would see it. Everything from passwords to financial data to (legal) porn. Believe me. Do you really want someone you don't really know suddenly finding out what kinks you're into?

Take a couple minutes to think about these things:
Does your computer have your passwords saved to your bank?
What does your browsing history say about you?
Are you the only one with access to your "family" computer? Do you know what Junior is doing on the computer?
Have you ever audited chat logs?
Email...you aren't one of those people that exchanges racy or risqué emails with friends or immediate family, are you? Because in most cases those emails aren't kept encrypted...
Is your email set to automatically allow logins? Are you getting bills or information sent to those email accounts that you don't want shared?
Anyone send pictures to you, or have you stored images on the computer that may be...compromising? Think about this from different perspectives. More and more often, teens are finding out the consequences of allowing their party pics to be viewable on the Internet, and job seekers discover that the picture their friend took of them barfing into a toilet at an underage party doesn't win brownie points with prospective employers.
Are you sure there's nothing you mind being passed around?
Are you sure all the music and videos on your computer are legal? The RIAA would love to make an example of you if they aren't.

Scarier yet...are you sure that you don't have spyware or malware on your computer that is downloading and sharing illegal material (such as child porn) without your knowledge??

There are steps that can be taken to minimize risk, such as encrypting your home directory or hard drive. Many people will say that they have nothing to hide or don't care if someone sees their embarrassing photos. That still doesn't address what happens if malware has been transferring illegal material using your computer and the tech at the computer shop discovers it...do you really want to take that chance?