Thursday, February 28, 2008

Web 2.5

A few hours ago I had a conversation with a SharePoint administrator I know about the new Google Sites service. He told me their announcement was the talk of the day and couldn't go unnoticed. We talked about what threat the new service could pose to MS SharePoint, or to Office at all.
For those of you who are familiar with organization/enterprise-oriented portals, the idea of Google Sites is not new: an individual, a team or an entire organization can build a site around an idea (a project, a shared point of interest, etc.) simply, quickly and without having to write a single line of HTML. Afterwards, they can add lists of objects to these sites: documents, pictures, spreadsheets, calendars, task lists, etc. This is done very easily, and with full integration with the user's desktop applications (MS Office, OpenOffice, web-based office suites, etc.).
Now for the exciting part. Google's product might not be better than the competition (yet) at the organization level. BUT, they are the first to bring such a polished, full-featured product to the Internet. Now everyone can create sites around ideas, projects, shared interests, etc., using the same tools they are already familiar with: Google Docs, Picasa, YouTube, RSS feeds and such.
I like to call this new level of services Web 2.5. Why? Because in Web 2.0, control over the content of the web was passed to the individual. Now, control over the content is passed to groups of individuals – collaborating. Imagine how powerful it might be for a team of students working on the same project to collaborate using such a tool, and to share their results with the client (a lecturer or a real client, it doesn't matter). This is the evolution of wikis.
Of course there are limitations: 10MB per file, integration with Google Talk is yet to come, workflows around documents aren't possible, and a lot more. But hey, we can call it Web 2.49 until then.

Wednesday, February 27, 2008

SysAdmins who write code

At work, my team is responsible for a dozen or more systems, with a few dozen development environments between them. This adds up to a large number of servers (many of them virtual) running a variety of enterprise software. Managing all this software requires each sysadmin to be an expert in our favorite language: Perl. Also, since all the servers are Windows, some cmd scripting knowledge is required. Rarely, for our internal development, other programming languages are used, such as C# or PL/SQL. Today I read, on the only Microsoft blog I read, that in Windows 2008, sysadmins who write code will have full power. This means that PowerShell will allow us to do what we already do very well, and enable us to do it for MS products too (I never wrote a Perl script to administer IIS).
In my experience, this PowerShell technology has existed for a year and a half, and yet I haven't written a single line of PowerShell code. As for the MS products we administer (Windows, IIS, etc.), we just don't write complex (more than cmd) scripts for them. I only hear the Exchange guy crying about MS removing functionality from the Exchange administration GUI, so that now, in order to do simple stuff, he has to use PowerShell. Brutal marketing, that is.
Now don't get me wrong, I'm all in favor of finally being able to control the rest of our systems using scripts. It's just weird that the way to do this was to remove functionality from the GUI, and that it comes this late (who knows when we will upgrade to Win2k8 with the new IIS, etc.). I'm all for .NET technologies (I use Mono at home), but I don't see how I'm going to replace "du -h" (we use MKS/Cygwin) with "get-childitem | measure-object -property length -sum" (taken from Wikipedia), especially if one day I replace a Windows server with a Linux server and all my scripts become worthless.
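For what it's worth, the portability worry has a middle ground: a short script in a cross-platform language survives an OS swap where both "du -h" and the PowerShell pipeline don't. A minimal sketch in Python (my choice here purely for illustration, not something we actually use for this):

```python
import os

def dir_size(path):
    """Total size in bytes of all files under `path` -- roughly what
    `du` or `Get-ChildItem | Measure-Object -Property Length -Sum` report."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # unreadable file: skip it, much like du's warnings
    return total

def human(n):
    """Format a byte count the way `du -h` does (powers of 1024)."""
    for unit in ("B", "K", "M", "G", "T"):
        if n < 1024:
            return f"{n:.1f}{unit}"
        n /= 1024
    return f"{n:.1f}P"
```

Then `print(human(dir_size(".")))` behaves the same on a Windows box and on any future Linux replacement.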

Wednesday, January 23, 2008

Hebrew 2.0

Many programmers, administrators, and even users waste a considerable amount of time dealing with software localization issues. When it comes to Hebrew, this amount of time doubles. For a start, Hebrew's direction (right-to-left) is something many software vendors forget in the first versions of a product (take Blogger, for example), so Hebrew is aligned in the wrong direction. And even when this issue is solved, there are still many quirks, such as when mixing Hebrew with English (or other left-to-right languages), or with numbers and parentheses. Other issues involve saving files in the correct code page, or displaying data in the correct code page (how many times have you changed the browser's encoding on misbehaving sites?). Sometimes, file formats (such as plain text) include no magic header declaring the encoding, forcing the client software to guess how the data should be displayed. Moreover, since there are multiple ways to represent Hebrew (8859-8, 8859-8-i, 1255, UTF-8, etc.), conflicts and bad implementations abound.
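The code-page mess is easy to demonstrate. A small sketch (Python, chosen just for illustration) shows that the legacy 8-bit encodings agree with each other for the plain letters, while decoding UTF-8 bytes with a one-byte code page produces the familiar mojibake:

```python
word = "שלום"  # four Hebrew letters

as_utf8 = word.encode("utf-8")        # 8 bytes, two per letter
as_cp1255 = word.encode("cp1255")     # 4 bytes (windows-1255)
as_8859_8 = word.encode("iso8859-8")  # 4 bytes

# For the plain letters, the legacy Hebrew code pages use the same bytes...
assert as_cp1255 == as_8859_8

# ...so a wrong guess among them often goes unnoticed. Guess wrong between
# UTF-8 and a one-byte code page, though, and you get garbage:
mojibake = as_utf8.decode("latin-1")
print(mojibake)  # a run of × characters instead of Hebrew
assert mojibake != word
```

(The legacy code pages do diverge on vowel points and punctuation; the letters alone are the deceptively compatible part.)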
My suggestion is a new way to represent Hebrew, at least in the computer software world. It involves a new language, written left-to-right, using Western characters, or some other characters already included in UTF-8 (possibly even Hebrew characters, though that would be very confusing). I also have a name for this language: Simplified Hebrew. This is the place to mention: I'm not looking to replace the holy language; I like Hebrew. The words would be the same words, with the same meaning and the same sound. Only the written language would look different. Think about how many issues would be solved. Adapting software for Hebrew speakers would be like adapting American (US) software for British (UK) users. As simple as it gets. Plus, Hebrew speakers won't have to compromise on their language when using computers, since software would be in Hebrew – Simplified Hebrew!
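To make the idea concrete, here is a toy sketch. The letter mapping is entirely my own invention for illustration, not any real standard; the point is only that the same words come out left-to-right in plain ASCII:

```python
# Hypothetical Hebrew-to-Latin letter mapping -- invented here purely
# for illustration; a real Simplified Hebrew would need a full standard.
HEBREW_TO_LATIN = {
    "א": "a", "ב": "b", "ג": "g", "ד": "d", "ה": "h",
    "ו": "v", "ל": "l", "מ": "m", "ם": "m", "ש": "sh",
}

def simplify(word):
    """Transliterate letter by letter; '?' marks letters not yet mapped."""
    return "".join(HEBREW_TO_LATIN.get(ch, "?") for ch in word)

print(simplify("שלום"))  # shlvm -- left-to-right, no code-page guessing
```

No bidi algorithm, no encoding detection, no browser menu: every quirk in the previous paragraph simply disappears.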

Friday, January 18, 2008

Big Oops

Sometimes we system administrators are blamed for loss of information. Such a loss could render a system unusable, or simply mean that missing information has to be restored from a backup. Recently, my team and I were blamed for at least two incidents like that. In one of them, the user was the one who deleted the information; in the second, we still don't know what went wrong, but we're sure we had nothing to do with it. In both cases we restored everything back to normal, at the price of a few work days.
Such mistakes only justify users putting the blame on us, since such sysadmins really are stupid. It's a shame they give all of us a bad name.

Friday, December 28, 2007

Interviews

I have great faith in people. I believe that everyone deserves a chance to prove themselves. But if one fails... that's a different story.
In the past week I've been interviewing computer scientists for various jobs. Usually, I'm looking for the top 20%. It doesn't matter if they excel at programming, algorithmic thinking or other technical topics, only that they excel at something. I don't mind spending time teaching the material required to successfully fill a certain job, as long as I know the person is one I'm not completely wasting my time on. Some of the best computer scientists I know hadn't touched a computer until they started their degree (or even later); others have been programming since they were 10 years old, and speak Java better than Hebrew. So there is no formula for finding the best people for the job. This is why interviews aren't simple, for either side. It is very important to ask technical questions from different fields of computer science, and also to test the way one thinks. So the interview might be long, stressful, dynamic and tiresome. But those who made a good impression will usually know it, since they'll have that good feeling inside after leaving the interview. I'm not a bad person, but there are some people who made me (yes, they made me) ask them "So, why did you study computer science?" at the end of the interview (such a question at the beginning is legitimate and makes sense). Think about that.

Saturday, December 22, 2007

Reverse engineering

Every now and then I find myself reading the source code of some of the commercial products we use at my office. I won't specify the products' names, but I will mention that some of the biggest vendors are on the list. When it comes to open source software, there is nothing wrong with what I'm doing: I have a problem, Google doesn't help much, after some time I decide it's best to see how the problematic piece of software is written, and after some more time the issue is solved (and a patch suggestion is submitted). But when it comes to commercial products, the process involves an extra stage: reverse engineering. Also, a code-fix suggestion is almost never sent to the vendor.
So, what do we reverse engineer at our office? Things that don't require much to "reverse", such as Perl and JSP (yes, vendors do supply these), and things that require decompiling, such as Java classes (usually packed inside archives of some sort). Most major software companies use these technologies, so after reverse engineering one vendor's code, it's not difficult to do the same for another. The problem begins when trying to understand how the code works, and where the culprit is. No matter if it's Java or Perl, such code is usually unreadable. Usually, a few hours or so are required to make progress.
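Even before firing up a decompiler, a class file's header already tells you something. Per the JVM specification, every .class file begins with the magic number 0xCAFEBABE, followed by big-endian minor and major version numbers. A short sketch (Python here, just for illustration):

```python
import struct

def class_file_version(data):
    """Parse the 8-byte header of a Java .class file.

    Per the JVM spec: 4-byte magic 0xCAFEBABE, then 16-bit minor
    and 16-bit major versions, all big-endian. Major versions map
    to compilers: 48 = Java 1.4, 49 = Java 5, 50 = Java 6.
    """
    magic, minor, major = struct.unpack(">IHH", data[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a Java class file")
    return major, minor
```

Run it on the first eight bytes of a class pulled out of a vendor's archive, and you know which compiler generation you're dealing with before picking a decompiler.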
Here are some examples of why we do this:
* Product installation fails for no obvious reason. This is where the vendor blames our environment, and we have to prove that the problem lies in their code.
* Some of a product's features are failing. See above.
* The product is badly documented, if documented at all. The only way to understand how it expects its inputs, or how it uses resources, is by finding the right place in the code.
* Some programming framework (usually open source, such as Struts) is misbehaving, or a deeper understanding of it is required.
One final word about this topic: every professional developer and system administrator should be able to pull off such tricks. Otherwise, one has to rely on answers found via Google, or worse, patches from the vendor, both of which might be too little, too late.

Monday, December 17, 2007

Legacy Horror

Were you ever ordered to debug an application written ages ago? Was the original programmer still around? Did the users contribute any valuable info besides "it's not our fault" or "the other guy knew how to solve this"? Did you think that the people who wrote software in technologies which are now obsolete are idiots or incompetent programmers (no offense)?
Well, since I'm the one writing these questions, you can guess what my answers would be. This week I got to work on an issue in an application written before the turn of the century, for people who still think (!) they should solve every problem with VB and Oracle Forms. Of course the issue was critical, and the blame was on us - the IT and IS departments. After three days of hard work, and the complete waste of about five software developers' time, the issue was solved. And still, after solving the problem, we don't know what caused it. Code just disappeared from the application. We consider it "voodoo", and blame ourselves for giving the users complete control over the application, so they could destroy it by accident (they'll never admit it, and we cannot prove it).
Some would say I'm too harsh, or mistaken. These things happen, and they might be entirely our fault. But I believe our only mistake was letting users continue to work on such a legacy application.
What is the life span of software developed in-house? 7 years? 10 years? And what are the costs of prolonging legacy applications' lives? Which is better: to rewrite using modern methodologies, or to patch the software until computer architecture changes so dramatically that it simply won't run?
I wish I had textbook answers. Now I remember why I hate in-house software development so much. Nothing good comes out of it. And when problems arise, it happens so many years after the software was written that nobody knows how to solve them. So the software costs money (and resources) when written, when fixed, when rewritten, and all over again.