The Artima Developer Community

Weblogs Forum
Is Static Typing a Form of Bad Coupling?

74 replies on 5 pages. Most recent reply: Apr 23, 2006 10:52 AM by Isaac Gouy

James Watson

Posts: 2024
Nickname: watson
Registered: Sep, 2005

Re: New refactoring type Posted: Apr 12, 2006 7:15 AM
The premise of 'types as coupling' seems to be that if I want to change a commonly used type to some other type (like the return value of an often-used method), I would have to go through and modify all the declarations of that type, whereas in Python I could just substitute a type that provides the necessary operations and everything would work.

There's something to that ability of Python. However, my experience is that this kind of change is rare. What's more likely is that you want to change the behavior, and maybe the public interface, of the often-used type.

In that case, typing is not a hindrance but a huge help. When you work with legacy code, being able to see how something is used is crucial; I would be lost without it right now. The main reason is that the code base I work with has very poor abstraction, but I'm resigned to the belief that a clean code base is a fairy tale and I will never see one.

Marcin Kowalczyk

Posts: 40
Nickname: qrczak
Registered: Oct, 2004

Re: Is Static Typing a Form of Bad Coupling? Posted: Apr 12, 2006 9:05 AM
> For example in a numerical API an add method is ambiguous for:
>
> d = 1.5
> i = 2
> i.add( d )
> 

> Does the above produce: 3.5 or 3 or Error.

3.5. This is true in all dynamically typed languages I know of (with + in place of add), so it's unambiguous.
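Spelled out in Python rather than pseudocode, the coercion Marcin describes looks like this: mixed int/float arithmetic promotes the int operand, so the result is unambiguously a float.

```python
# Mixed-mode arithmetic in a dynamically typed language: the int operand
# is promoted to float, so the result is 3.5, not 3 or an error.
d = 1.5
i = 2
print(i + d)  # 3.5
```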

Marcin Kowalczyk

Posts: 40
Nickname: qrczak
Registered: Oct, 2004

Re: Is Static Typing a Form of Bad Coupling? Posted: Apr 12, 2006 9:18 AM
> 1. I find I really need a declaration line, otherwise it
> is too easy to make a silly typo and that the typo
> declares a new variable instead of throwing an error.

In other words, definition of a new variable should be distinguished from assignment to an existing variable, and accessing non-existing variables should be a static error.

I agree. And this is perfectly compatible with dynamic typing.
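As a sketch of the hazard behind that wish (class and attribute names invented for illustration), here is how a misspelled assignment in Python silently creates a new binding instead of raising an error:

```python
# A misspelled attribute write silently creates a NEW attribute
# instead of updating the existing one -- no error is raised.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balanse = self.balance + amount  # typo for 'balance'

a = Account()
a.deposit(10)
print(a.balance)  # 0 -- the deposit went to the misspelled attribute
```

A checker that distinguished definition from assignment could flag the typo statically without constraining what type balance may hold, which is the sense in which such a check is compatible with dynamic typing.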

Dick Ford

Posts: 149
Nickname: roybatty
Registered: Sep, 2003

Take a look at Dylan Posted: Apr 12, 2006 9:27 AM
And especially the open-sourced, Windows-only Functional Objects compiler. The compiler outputs a whole bunch of interesting information about how much type inference it was able to do. You can then play with the optional type declarations to see what more gets optimized.

Dylan is an interesting language, in that it is basically an infix-notation Lisp, and in the case of Functional Objects, it could be compiled down to very efficient machine code. It was too verbose for me in some ways that had nothing to do with mandating type declarations, but interesting nonetheless.

But as I believe was alluded to, even in dynamic languages, via something like PyPy's RPython (I think it's called), you could run your program and then have all that information spit out, because at the end of the day Ruby, Java, and Python are all strongly typed languages.

Tommy McGuire

Posts: 1
Nickname: mcguire
Registered: Apr, 2006

Re: Is Static Typing a Form of Bad Coupling? Posted: Apr 12, 2006 10:36 AM
Java, even Java with generics, has a very, very poor type system. I would suggest you actually try using ML or one of its relatives before trying to come up with your own suggested fixes. (Objective Caml is a pretty good example---it is a good system to use and does not have a lot of other weird semantic issues.) Their type systems are far from perfect, but they do demonstrate some of the possibilities.

Static typing is not a good thing because it makes lawyers happy. It's not a good thing because otherwise suits won't give you money. It's not even a good thing "because it would be nice to check some things at compile time." (Heck, it's not even a good thing for performance or any of the other reasons frequently given.) It's good because it makes development easier. It's good because it reduces the number of things you have to think about. It's good because "flimsiness" costs, during development.

When you write, "When we litter a program with type annotations, we're tightly binding an error detection scheme to the form of the program," you are missing something. Static typing, done correctly, isn't an error detection scheme. It isn't something that you add to a program to make it more palatable. It is a description of some of the logical properties that you claim your program has. You know, like contracts, except that you don't have to write a large number of tests to try to verify that the contract is maintained. Now, all type systems short of full formal verification tools are going to put limits on the properties you can express, and poor type systems like Java's do a very good job of hiding the whole properties thing as well as being far too limiting.

I'm going to make a wild-ass guess and suggest that you have never added "-Wall" to the CFLAGS of a moderately large project built using gcc. The -Wall option to gcc just adds a few checks for things that have been problematic, kind of like lint and kind of like "use warnings" in Perl. The result, for a moderately large project not written from the ground up using -Wall, is typically tens of thousands of warnings. The easiest way to fix those warnings? Remove "-Wall". After all, the program already works, right? Having done that experiment seems to make soft typing systems (which is what you are proposing) a lot less appealing.

Michael Feathers

Posts: 448
Nickname: mfeathers
Registered: Jul, 2003

Re: Is Static Typing a Form of Bad Coupling? Posted: Apr 12, 2006 10:51 AM
> Java, even Java with generics, has a very, very poor type
> system. I would suggest you actually try using ML or one
> of its relatives before trying to come up with your own
> suggested fixes. (Objective Caml is a pretty good
> example---it is a good system to use and does not have a
> lot of other weird semantic issues.) Their type systems
> are far from perfect, but they do demonstrate some of the
> possibilities.
>
> Static typing is not a good thing because it makes lawyers
> happy. It's not a good thing because otherwise suits
> won't give you money. It's not even a good thing "because
> it would be nice to check some things at compile time."
> (Heck, it's not even a good thing for performance or any
> of the other reasons frequently given.) It's good
> because it makes development easier. It's good because it
> reduces the number of things you have to think about.
> It's good because "flimsiness" costs, during
> development.
>
> When you write, "When we litter a program with type
> annotations, we're tightly binding an error detection
> scheme to the form of the program," you are missing
> something. Static typing, done correctly, isn't an error
> detection scheme. It isn't something that you add to a
> program to make it more palatable. It is a description of
> some of the logical properties that you claim your program
> has. You know, like contracts, except that you don't have
> to write a large number of tests to try to verify that the
> contract is maintained. Now, all type systems short of
> full formal verification tools are going to put limits on
> the properties you can express, and poor type systems like
> Java's do a very good job of hiding the whole properties
> thing as well as being far too limiting.
>
> I'm going to make a wild-ass guess and suggest that you
> have never added "-Wall" to the CFLAGS of a moderately
> large project built using gcc. The -Wall option to gcc
> just adds a few checks for things that have been
> problematic, kind of like lint and kind of like "use
> warnings" in Perl. The result, for a moderately large
> project not written from the ground up using -Wall, is
> typically tens of thousands of warnings. The easiest way
> to fix those warnings? Remove "-Wall". After all, the
> program already works, right? Having done that experiment
> seems to make soft typing systems (which is what you are
> proposing) a lot less appealing.

I think you think you know me better than you do.

Greg Wilson

Posts: 6
Nickname: gvwilson
Registered: Oct, 2005

Re: Is Static Typing a Form of Bad Coupling? Posted: Apr 12, 2006 11:13 AM
It's increasingly common to delay type decisions in C++ using traits classes:

template<class Traits>
class Worker {
    typename Traits::Foo first;
    typename Traits::Bar second(typename Traits::Giz third) {
    ...
    }
};

and then:

class SomeTypes {
public:
    typedef unsigned int Foo;
    typedef double Bar;
    typedef double Giz;
};

and stuff it in. Of course, you don't have untyped variables to begin with ;-)

p.s. if you'd like to see these ideas get into a real
language, they've started thinking about Python 3000:

http://mail.python.org/pipermail/python-3000/
http://mail.python.org/pipermail/python-dev/2006-March/062644.html
http://www.python.org/dev/peps/pep-3000/

Leo Lipelis

Posts: 111
Nickname: aeoo
Registered: Apr, 2006

Re: New refactoring type Posted: Apr 12, 2006 12:53 PM
Ruby code ==> Java code
or, more generally
dynamic language ==> static language

:-)

> Also, I'm not sure why I'd ever not want to see the type info in code I'm reading? I can understand not wanting to type it all, but looking at variables and not knowing their types never strikes me as useful.


It's useful when you want to have a function like this:


def f(a, b)
  return a + b
end


that works for any two arguments that understand addition. This is a trivial example, but there are a lot of patterns that apply to arguments of many different types. Java's generics are super-verbose and super-ugly and do not present a good solution to my mind.
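For illustration, the same untyped function transposed to Python behaves identically: any two arguments that understand + will do.

```python
# A duck-typed function: no declared types, works for anything with '+'.
def f(a, b):
    return a + b

print(f(1, 2))          # 3
print(f(1.5, 2))        # 3.5
print(f("foo", "bar"))  # foobar
print(f([1], [2, 3]))   # [1, 2, 3]
```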

It's also useful when you want to be able to hot-load and hot-unload code without downtime.

It's also useful because it helps eliminate the compile cycle, making development an interactive question-and-answer session rather than a lengthy monologue followed by a compile, followed by a test, followed by another monologue, and so on. Dynamic typing provides a natural environment for having a dialogue with your code (potentially even while it's running). Sure, it can be done in Java too, but it is anything but natural. Statically typed languages tend to sprout powerful IDEs, but that's not a feature to my mind -- it's simply compensation for the monologue-like nature of development in a statically typed language. If your development is a dialogue, then maybe you don't really need a refactoring IDE.

So, yeah, specifically not having a static type attached to a variable seems like a very useful property to me. I can imagine how, in some cases, knowing the type of something can make one more narrow-minded, too. On the other hand, not having a type suggesting the function to your mind can help you be more open-minded with regard to your approach. For example, is it a web page? Is it a function? Maybe it's good to forget the difference between a web page and a function. Maybe it's not different.

I'm not going to say that static typing is 100% useless, mind you. That's not the point. In this post I'm using Java only as an example. Static types can be very useful.

damien morton

Posts: 15
Nickname: dmost
Registered: Jan, 2004

Re: Is Static Typing a Form of Bad Coupling? Posted: Apr 12, 2006 1:15 PM
Bill Venners:
What I don't understand about how optional typing would work is if the types aren't everywhere, then exactly what do they mean? For example, if I must declare types for function args, but need not for local variables, does that mean I can't use a local variable as a parameter in a call to a function?...

You don't have to declare any types at all: not for local variables, function parameters, or anything.

...Because the local variable doesn't have a type, but the function arg does. That seems draconian, but if you say you can use a local variable to pass a value to a function, then that means the value need not be the specified type. Which means the type means nothing I can count on. How does that work?

I'm just a beginner at Haskell, but type inference is really quite cool. At the lowest level, programs work with literals, which have known types. When I say x = 1, I can pretty much assume that x is of type int. Well, let's more broadly assume it's an integer, without making any assumptions about the number of bits involved.

So I do a pass over my program looking for literals being assigned to untyped variables or being used with untyped function parameters or whatever, and insert type annotations based on the types of the literals being used. Then I do another pass, looking for untyped variables being assigned typed variables, and I annotate the untyped variables with type annotations. Rinse and repeat. If you get conflicting types, that's an error.

The basic idea is that, starting with the types of literals plus whatever type annotations are offered by the programmer, it is possible to flow the types through expressions and function calls (like dataflow, but with types instead of data), hopefully building up a fully typed program. PyPy and Psyco do this kind of thing in their Python compilation process, though they fall back to runtime type checks and dispatch where their inference process falters. (I'm not really clear on what properties of Haskell ensure that inference doesn't falter like it does in PyPy/Psyco.)

consider:
x = 1; // this is an integer
y = 2.0; // this is a floating point number
z = x + y; // result of int+float is float
foo(x,y); // foo is foo(int,float);
foo(y,y); // hmm, foo is now foo(float,float) which is a superset of foo(int,float), so we go with this one
foo("hi",y); // ERROR: foo(string,float) conflicts with foo(int, float) AND with foo(float,float)


There's no runtime type checking: the absent type declarations must all be filled in by the compiler during the inference process, otherwise the compilation fails.
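As a rough, toy illustration of the passes described above (representation invented for this sketch: each statement is a (target, operands) pair, where an operand is a numeric literal or a variable name), the "rinse and repeat" inference might look like this in Python:

```python
NUMERIC = {"int", "float"}

def join(a, b):
    """Combine two type guesses; int and float widen to float."""
    if a is None:
        return b
    if b is None:
        return a
    if a == b:
        return a
    if {a, b} == NUMERIC:
        return "float"
    raise TypeError("conflict: %s vs %s" % (a, b))

def literal_type(v):
    if isinstance(v, bool):
        return "bool"
    if isinstance(v, int):
        return "int"
    if isinstance(v, float):
        return "float"
    return None  # a string is a variable name, not a literal

def infer(stmts):
    """Flow literal types through assignments until nothing changes."""
    types = {}
    changed = True
    while changed:  # rinse and repeat
        changed = False
        for target, operands in stmts:
            t = None
            for op in operands:
                op_t = types.get(op) if isinstance(op, str) else literal_type(op)
                t = join(t, op_t)
            t = join(t, types.get(target))
            if t is not None and types.get(target) != t:
                types[target] = t
                changed = True
    return types

print(infer([("x", [1]), ("y", [2.0]), ("z", ["x", "y"])]))
# {'x': 'int', 'y': 'float', 'z': 'float'}
```

This only conveys the flavor of the scheme, not Hindley-Milner: real inference (as in Haskell or ML) unifies type variables rather than iterating to a fixed point.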

Vesa Karvonen

Posts: 116
Nickname: vkarvone
Registered: Jun, 2004

Re: Is Static Typing a Form of Bad Coupling? Posted: Apr 12, 2006 1:59 PM
> I think that the state of the art is that your language ends up looking like ML or Haskell [...]

Whether or not that is true (I don't think that it is), I don't see why that should be an issue.

> It just doesn't fly well with subtype polymorphism.

WTF? Whether or not type inference works well (or well enough) with subtyping (several languages have been developed that provide both ML-style type inference and subtyping and combine them in some form; see O'Caml, for example), I think that you are, well, looking for something, out of habit, that you really don't need. Subtype polymorphism is neither a necessary nor a sufficient ingredient for programming (whether in the small or in the large).

Outside of a few select examples on extremely well understood domains or contrived examples, I've never seen large (both deep & wide) inheritance hierarchies that I would not classify as poor spaghetti design. I think that inheritance/subtyping is a red herring.

I've read quite a few books on OOA/OOD and it seems to me that the more pragmatic the authors are (or the more real world experience the authors have) the less emphasis there is on inheritance (or other early OOisms like "modelling the real world"). In fact, the books on OOD whose techniques and principles I've actually found to be useful decidedly show ways to create (more) flexible designs through use of various forms of composition other than inheritance. Principles like DIP and OCP really have nothing to do with OO much less with inheritance or subtyping.

In mainstream OO-languages inheritance is everywhere. In fact, in some OO languages the only way to define a new type is to define a new class (or some other kind of construct that is syntactic sugar for a particular form of a class), which may inherit some other class(es) or may be inherited by other classes. Other fundamental ways to construct types may not be provided at all (or are provided in a broken legacy form like union in C++). Let me elaborate on this.

In a typical OO language, to define a product of two types T and U, you write a class that has two members, one of type T and other of type U (or use the dreaded multiple inheritance):

class Product_Of_T_And_U {T t; U u; /* constructors and other necessary things elided */}


In a typical OO language, to define a sum of T and U, you write three classes: one abstract base class, one derived class for type T and one derived for type U.

class Either_T_Or_U {/* abstract methods elided */}
class T extends Either_T_Or_U {/* methods elided */}
class U extends Either_T_Or_U {/* methods elided */}


In a typical OO language, to define an interface I for a concept that has an unbounded set of different implementations {A0, A1, ...} that need to be mixed in run-time (you need dynamic dispatch), but when there is no fundamental reason to have a hierarchy of such concepts (there surprisingly rarely is), you define an interface (or an abstract class) and several implementations:

interface I { I foo(I x, I y); /* a silly method specification just for exposition */ }
class A0 implements I {/* methods elided */}
class A1 implements I {/* methods elided */}
/* ... */


It can be really difficult to do much of anything in those languages without thinking about inheritance in one form or another.

Perhaps it is just me, but I think that this may actually be one of the main reasons why so many people have great difficulty grasping functional languages like ML and Haskell that do not provide inheritance, but do provide other forms of type constructors. If the only type constructor you understand is inheritance, and you've been trained to do literally everything with inheritance (even when it really doesn't make sense), then it is probably quite difficult to understand how anything non-trivial could be expressed without inheritance.

In a typical functional language, to define a product of two types T and U, you use the product type constructor. For example, in SML you would write:


type product_of_t_and_u = t * u


In a typical functional language, to define a sum of two types T and U, you define a new algebraic datatype. In SML, you would write:


datatype either_t_or_u = T of t | U of u


(Actually, you could also just use the general purpose sum type constructor either.)

In a typical functional language without subtyping or inheritance, to define an interface I and an unbounded set of implementations {A0, A1, ...} you could simply define a (recursive) record of functions and functions that construct values of type I. In SML, you would write:


datatype I = I of {foo: I * I -> I (* parallels the Java example *)}
fun A0 (* constructor args elided *) = let (*privates*) in I {foo = (* implementation elided *) } end
fun A1 (* constructor args elided *) = let (*privates*) in I {foo = (* implementation elided *) } end
(* ... *)


(This isn't the only way to achieve the same effect in SML, and other functional languages support other techniques, such as existentials or dynamic typing.)

Even though I've probably still written most code in my life in OO languages (most of it in C++) and consider myself quite capable and knowledgeable in OOD, I really don't miss inheritance or subtyping (per se) all that much when I program in functional languages like Scheme or SML.

In summary, what I'm saying here is that inheritance and subtyping are (still) grossly overrated. You really can do practical large scale programming without inheritance and subtyping.

Bill Venners

Posts: 2284
Nickname: bv
Registered: Jan, 2002

Re: Is Static Typing a Form of Bad Coupling? Posted: Apr 12, 2006 3:00 PM
> I'm just a beginner at Haskell, but type inference is
> really quite cool. At the lowest level, programs work with
> literals, which have known types. When I say x = 1, I can
> pretty much assume that x is of type int. Well, let's more
> broadly assume it's an integer, without making any
> assumptions about the number of bits involved.
>
Looks like I'm going to be learning Haskell soon, time permitting, because the local Silicon Valley Patterns group is going to have a track on it. So after that I may be better able to understand the issues.

But I think there is a bit of confusion in this discussion about what I mean by optional versus implicit typing. Let me clarify what I mean, and you can correct me if I'm not using the terms correctly. To me, implicit typing means that, as in Java, every variable and expression has a type known at compile time (or say, at the source code level), but, unlike Java, I as the programmer don't always have to explicitly say what it is. A type can be inferred from context, as in the example you provide. The language defines the rules for type inference, and then that allows me to express a fully statically typed program with less verbosity.

Optional typing to me means that you may or may not define a type: leaving the annotation out means the type isn't just unwritten, it's actually undefined. In an optionally typed language, therefore, if you leave the type declaration out, the variable actually doesn't have a type. That variable can hold onto anything, as in Python.

For example, given:

j = 1

In a type-inferencing language, j would get the type int. An alternative way would be to say it explicitly:

int j = 1

But in both cases, j is defined to be type int. If you next tried to assign a string to it, it wouldn't compile:

j = "won't compile"

In an optionally typed language (I don't know of one of these, but Guido's discussion of doing this in Python is, I would think, an example of the concept), however, if I say:

j = 1

The j variable doesn't actually have a type, other than object. It can hold a reference to anything, which means later on I could quite happily assign a string to it:

j = "this does work"

But if I were to add the optional type declaration, as in:

int j = 1

After that, j does have a type, and I couldn't assign a string to it.

j = "this won't fly, because j is an int"

It occurs to me that Java is actually kind of an optionally typed language in the sense that I can use reflection to invoke a method that matches a signature. So Java kind of has optional typing, but the ugly, verbose syntax I have to go through to do things the reflection way really points me toward the static typing way. Perhaps what Michael and Guido and you are talking about is starting with a dynamically typed approach to quicken development, then later, as things grow, adding type information to give the program some of the maintainability benefits of static typing if it actually does get big.

One problem I see is that I'm not sure how much of those static typing analysis benefits you get if static typing is optional. If the whole program isn't statically typed, I'm not sure the benefit is worth the cost of programming in a static style. Take refactoring, for example. If I change a method signature in Java, then my IDE can go out and find all the places that call the method and need to be updated, and it can help me update them. Or if I do it by hand, I can change the signature, compile, and the compiler will give me a to-do list of places I need to go fix. If a method is being invoked in the dynamic way, via method signature matching from an untyped variable, the tool can't be sure it is a place to change, because at runtime you could be invoking a method with the same signature but in a different class. And in Java, in fact, neither the refactoring IDE nor the compiler can usually help me find places where the method was being invoked via reflection, places which I broke by changing the method signature.
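The escape hatch Bill mentions can be sketched in Python (names invented for illustration): when the call site names the method as a runtime string, a rename tool that updates the definition cannot see that this call site must change too.

```python
# Dynamic invocation: the method name is data, invisible to a
# compiler or refactoring tool that renames Greeter.greet.
class Greeter:
    def greet(self, name):
        return "hello, " + name

method_name = "greet"  # might come from config, wire data, or user input
g = Greeter()
print(getattr(g, method_name)("world"))  # hello, world
```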

Lastly, as I mentioned earlier, the freedom you get with the dynamic approach is that you can change types at runtime. In Rails, for example, they instantiate a controller object for each incoming HTTP request and populate it at that time with one field for each POST or GET parameter. Each request kind of gets its own class. It is very powerful, but there is no mention of these dynamically created classes at the source code level, so you can't add types to them later. They don't exist in the source. So if you're planning on adding types later, you might avoid doing the metaprogramming stuff in the dynamic language, which means you'll be missing out on half the benefit of that language.
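A minimal Python sketch of that Rails-style trick (names invented): one attribute per request parameter, attached at runtime, so the fields never appear in the source and there is nothing a type annotation could attach to.

```python
# Per-request 'fields' created at runtime from the parameter dict;
# none of these attributes exist anywhere in the source code.
class Controller:
    pass

def handle(params):
    c = Controller()
    for name, value in params.items():
        setattr(c, name, value)  # dynamically created attribute
    return c

c = handle({"user": "alice", "page": "2"})
print(c.user, c.page)  # alice 2
```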

Vesa Karvonen

Posts: 116
Nickname: vkarvone
Registered: Jun, 2004

Re: Is Static Typing a Form of Bad Coupling? Posted: Apr 12, 2006 3:15 PM
> One problem I see is that I'm not sure how much of those static typing analysis benefits you get if static typing is optional. If the whole program isn't statically typed [...]

Indeed. You are not the only one who sees that as the main problem:

http://lambda-the-ultimate.org/node/1311#comment-15150

Isaac Gouy

Posts: 527
Nickname: igouy
Registered: Jul, 2003

Re: New refactoring type Posted: Apr 12, 2006 5:06 PM
Leo Lipelis wrote
> Statically typed
> languages tend to sprout powerful IDEs, but it's not a
> feature to my mind -- it's simply a compensation for the
> monologue-like nature of development in a statically typed
> language. If your development is a dialogue, then maybe
> you don't really need a refactoring IDE.

I've heard that Lisp IDEs were exceptionally powerful, and I know that powerful Smalltalk IDEs were extended with unit-test and refactoring capabilities when Java programmers had nothing more than a code editor.

Which ought to suggest that dynamically typed languages tend to sprout powerful IDEs :-)


> I can imagine how in some cases knowing the type
> of something can make one more narrowminded too. On the
> other hand, not having a type suggesting the function to
> your mind can help you be more open minded w/ regard to
> your approach. For example, is it a web page? Is it a
> function? Maybe it's good to forget the difference
> between a web page and a function. Maybe it's not
> different.

Maybe the program will run, maybe it won't :-)

Leo Lipelis

Posts: 111
Nickname: aeoo
Registered: Apr, 2006

Re: New refactoring type Posted: Apr 12, 2006 6:51 PM
> I've heard that Lisp IDEs were exceptionally powerful, and
> I know that powerful Smalltalk IDEs were extended with
> unit-test and refactoring capabilities when Java
> programmers had nothing more than a code editor.
>
> Which ought to suggest that dynamically typed languages
> tend to sprout powerful IDEs :-)

Ok, point taken. But I still can't get rid of the feeling that a lot of niceties in IDEs are just cover-ups for deficiencies in the base platform. For example, compare getter/setter generation in Java+Eclipse (yuck) vs. free getters/setters in Ruby (yum). Generated code is hard to maintain, even with good IDE support.

> > For example, is it a web page? Is it a
> > function? Maybe it's good to forget the difference
> > between a web page and a function. Maybe it's not
> > different.
>
> Maybe the program will run, maybe it won't :-)

No compiler I'm aware of will guarantee the program will run.

----------------

To go back on topic, my personal opinion is that we are already drowning in way too many configuration files. Even if a separate type file were optional, it would still suck: I don't want to type two files when one would suffice. Further, some forms of decoupling are really degenerate. Some things really belong together and should be coupled; one shouldn't be a zealot about decoupling, in my opinion. For example, variable definition and variable use should be as close as possible. But someone might insist that variable definitions should go into a separate file, in order to "decouple" them. That way you get a dictionary of all namespaces and variables used in the program. Nice dictionary, eh? Not to me. I prefer to see definitions and usage together, to increase the readability of the functional usage. I don't care whether there is a dictionary or not, because I don't read dictionaries. In real life I don't sit down to read a dictionary; I read a book and occasionally use the dictionary to look something up. So the bulk of the time is spent on reading functional usage (or declarative "usage", in constraint-based languages).

The whole point of decoupling is to allow change of one piece of code without affecting another piece of code in any way. If you decoupled static types into their own config file, what would be the point? How could you change that kind of config file without messing up the entire project? I mean, if you go into some static type config file and change string to int, that's it, you broke it. Because it's not a piece of information that will be meaningfully changed on its own, there is no benefit in decoupling it. On the other hand, by moving type information away from your functional usage context you make it harder to read the code without some fancy IDE. So you pay the readability penalty and gain nothing in return.

Now I can see that you might use such a static type config file to slowly introduce static types without the intent of ever changing or removing them (both changing and removing would break the rest of the source, thus invalidating the whole point of decoupling, which is to allow change without breakage). But even in that case, in order to add something to this config file, you'd have to crack open your class source code, side by side, and look at it while typing in the config file. That sucks. Of course you can make the IDE cover up this deficiency. However, in my useless opinion, an IDE should never be used to cover up flaws. In other words, the language should work as closely to perfect as possible with as few tools as possible. Each new tool should only add usability and should not be used in an alleviative capacity.

So, unless someone convinces me otherwise, I'm against the idea of "decoupling" types from their functional usage.

Bill Venners

Posts: 2284
Nickname: bv
Registered: Jan, 2002

Re: New refactoring type Posted: Apr 12, 2006 7:11 PM
> So, unless someone convinces me otherwise, I'm against the
> idea of "decoupling" types from their functional usage.
>
Perhaps Gilad Bracha can convince you. I found this paper via a link someone posted earlier in this topic to a discussion on lambda the ultimate:

http://pico.vub.ac.be/~wdmeuter/RDL04/papers/Bracha.pdf

It is short and easy to understand, so take a look. He defines optional typing as something that doesn't affect the runtime semantics of the program, and points out that in that case, you could plug in more than one type system. He also claims that you can enjoy most of the analysis benefits of statically typed languages via this optional, pluggable approach.

His conclusion:

The dichotomy between statically typed and dynamically typed languages is false and counterproductive. The real dichotomy is between mandatory and optional type systems.

A new synthesis that combines most of the advantages of both static and dynamic typing is both possible and useful, based on the notions of optional and pluggable type systems. In summary:

1. Mandatory typing causes significant engineering problems
2. Mandatory typing actually undermines security
3. Types should be optional: runtime semantics must not depend on the static type system
4. Type systems should be pluggable: multiple type systems for different needs

Copyright © 1996-2019 Artima, Inc. All Rights Reserved. - Privacy Policy - Terms of Use