The age-old interpreted vs compiled language debate
I went to see my good friend Anne-Marie at her school today, and she mentioned that she had asked her husband if the company he works for needed any programmers. He said they always need programmers, so from what I understood, he viewed my journal, followed a link to the Ruby website, and said something along the lines of "Oh, an interpreted language..." Doesn't it bother anyone else that in this day and age, programmers still believe the old myths about programming languages?
What's an interpreted language? Nothing, really, because no such thing exists. A language is basically just a mix of syntax and semantics that tells a computer how to do things. Now, a language can have an interpreted implementation, like Ruby, where the code is interpreted at run time. But there's nothing about the Ruby language itself that makes it interpreted; that's just how matz wrote the implementation.
My real problem is when people think that interpreted languages are just toy languages and that compiled languages are the real thing. That is so far from the truth! Mission-critical apps have traditionally been written in compiled languages, yes, such as COBOL, FORTRAN, C, or C++. However, they were used not because they were better, but because they were faster! When computers were slow, you had to do everything you could to make your application fast and usable. But we now have computers more powerful than all the computers used to send men to the Moon combined! So we can afford a little more high-level stuff, and I think it's the high-level stuff that "scares" some programmers.
Let's take a simple example: applying a change to every element of an array. In a language like C, the code looks pretty much like this:
/* Multiply every element of the array by 10. */
int arr[] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
int i;
for (i = 0; i < 9; i++)
    arr[i] *= 10;
Now, maybe it's because I'm weird, but that's a lot of code for such a trivial task. Let's look at the equivalent Ruby code:
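arr = [1, 2, 3, 4, 5, 6, 7, 8, 9]
arr.map! { |x| x * 10 }   # multiply every element by 10, in place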
That's shorter, isn't it? If you know both languages, it's definitely clearer, and even if you don't, you must admit that the code is easier to read. Would somebody say that the second snippet is "toyish"? Hardly, I think; since C# and Java are getting closures, the proponents of those languages would probably agree that the second version is better and more abstract than the first.
So, why does it matter whether a language is interpreted or compiled? I don't think it matters much. C# and Java don't compile to native code like C or C++ do; they compile to bytecode, which is then interpreted (or JIT-compiled) by their respective VMs.
In my opinion, you can't consider a language a toy just because it's interpreted. You need to look deeper than that. A programming language with lots of documentation, lots of third-party modules, books written about it, and production applications built with it is no toy in my book. That's a real programming language, and Ruby is exactly that: a real programming language.
For toy languages, see Befunge, Unlambda, brainf*ck, Whitespace, etc. I could be mean and include Java in that list too ;)