In a recent ACM Queue article, George Neville-Neil writes that the ubiquitous sockets API, now almost 30 years old, has not kept up with recent progress in networking and is due for a major overhaul.
Sockets are the primary means of network communication. So much so, writes George Neville-Neil in his recent ACM Queue article Whither Sockets?, that developers take the limitations of this almost 30-year-old API for granted:
The sockets API was first released as part of the 4.1c BSD operating system in 1982. While there are longer-lived APIs—for example, those dealing with Unix file I/O—it is quite impressive for an API to have remained in use and largely unchanged for 27 years. The only major update to the sockets API has been the extension of ancillary routines to accommodate the larger addresses used by IPv6...
The Internet and the networking world in general have changed in very significant ways since the sockets API was first developed, but in many ways the API has had the effect of narrowing the way in which developers think about and write networked applications.
Neville-Neil explains that expanded network topology, changes in latency and bandwidth, and the availability of concurrent processing on most systems today all call for updates to the sockets API:
The two biggest differences between the networks of 1982 and 2009 are topology and speed... The round-trip time between two machines on a local area network was measured in tens of milliseconds, and between systems over the Internet in hundreds of milliseconds, depending of course on location and the number of hops a packet would be subjected to when being routed between machines... Most computers had a single connection to a local area network; the LAN was connected to a primitive router that might have a few connections to other LANs and a single connection to the Internet.
The popularity of the sockets API, says Neville-Neil, resulted in the client/server model emerging as the dominant network programming model:
The model of distributed programming that came to be most popularized by the sockets API was the client/server model, in which there is a server and a set of clients. The clients send messages to the server to ask it to do work on their behalf, wait for the server to do the work requested, and at some later point receive an answer. This model of computing is now so ubiquitous it is often the only model with which many software engineers are familiar. At the time it was designed, however, it was seen as a way of extending the Unix file I/O model over a computer network...
That ubiquity has held back the development of alternative or enhanced APIs that could help programmers develop other types of distributed programs...
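The client/server model the quoted passage describes maps directly onto a handful of Berkeley sockets calls. A minimal sketch in Python, whose socket module is a thin wrapper over the same API (the echo-style behavior and names here are illustrative, not from the article):

```python
import socket
import threading

# The server side of the client/server model: socket(), bind(), listen(),
# accept(). After accept() the connection behaves much like a Unix file
# descriptor, read and written with recv()/sendall().
def serve_once(srv):
    conn, _addr = srv.accept()        # block until one client connects
    with conn:
        request = conn.recv(1024)     # read the client's message
        conn.sendall(request.upper()) # do some "work" and reply

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

t = threading.Thread(target=serve_once, args=(srv,))
t.start()

# The client side: socket(), connect(), then the same file-like I/O.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()

print(reply)  # b'HELLO'
```

The shape of the exchange—send a request, block, receive an answer—is exactly the extension of the Unix file I/O model over a network that the quote mentions.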
The rest of the article explores three areas that, according to Neville-Neil, are not served well by the sockets API:
Three disparate areas of networking are not well served by the sockets API: low-latency or realtime applications; high-bandwidth applications; and multihomed systems—that is, those with multiple network interfaces.
Neville-Neil explores proposed solutions to these areas in the concluding section of his article.
Do you agree with Neville-Neil that the socket API is due for an overhaul?
>Do you agree with Neville-Neil that the socket API is due for an overhaul?
He doesn't seem to know about epoll or kqueue, POSIX AIO, zero-copy network drivers (Linux appears to have had this since at least 2001 for some cards, and other OSes have almost certainly had it for at least as long), UDP, multicast, the various P2P apps that use -- you guessed it -- TCP and the existing Berkeley sockets API, etc.
I think the most important change did not happen to networks, but to network programmers. A lot of people writing networked software these days will not even know the sockets API, but use some middleware that handles all the "low level" stuff. (The results are not always pretty.)
On the other hand, you can indeed write scalable, robust, high-throughput, low-latency network software using the sockets API and some system-specific extensions like epoll and sendfile(). But this is far from easy. Obviously, you need a good understanding of the whole software stack involved, good programming skills, and good tools. This will not be changed by the introduction of a new API.
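The readiness-notification pattern this comment alludes to can be sketched portably: Python's selectors module picks epoll on Linux (kqueue on BSD/macOS) under the hood, letting one thread multiplex many connections instead of dedicating a thread per client (the reversed-echo behavior here is illustrative):

```python
import selectors
import socket
import threading

# One thread multiplexes the listening socket and all client connections.
# DefaultSelector chooses the best mechanism available on the platform:
# epoll on Linux, kqueue on BSD/macOS.
sel = selectors.DefaultSelector()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen()
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ)

def serve(n_clients):
    """Echo each client's bytes back reversed, then return."""
    served = 0
    while served < n_clients:
        for key, _mask in sel.select(timeout=1):
            sock = key.fileobj
            if sock is srv:                  # listening socket readable:
                conn, _ = srv.accept()       # a new connection is waiting
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:                            # a client sent data
                data = sock.recv(1024)
                if data:
                    sock.sendall(data[::-1])
                sel.unregister(sock)
                sock.close()
                served += 1

t = threading.Thread(target=serve, args=(1,))
t.start()

cli = socket.create_connection(("127.0.0.1", srv.getsockname()[1]))
cli.sendall(b"ping")
reply = cli.recv(1024)
cli.close()
t.join()
sel.close()
srv.close()

print(reply)  # b'gnip'
```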
Unless he intends to overthrow the entirety of TCP/IP as the dominant network protocol, I don't see sockets going away any time soon. A connection, in the TCP/IP universe, is a port/IP address pair on each end of the connection.
I could see a new API built on top of sockets that might make P2P programming simpler, or enable widespread parallel processing of activities via UDP broadcast mechanics. But I am not sure how you replace sockets in the TCP/IP universe, as sockets are just a logical abstraction of the 4 pieces of information that define a TCP connection.
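Those "4 pieces of information" are the (local IP, local port, remote IP, remote port) tuple, and both ends of a live connection can read their halves of it with getsockname() and getpeername(). A small loopback sketch, for illustration:

```python
import socket
import threading

# A TCP connection is defined by (local IP, local port, remote IP,
# remote port). getsockname() reports the local half and getpeername()
# the remote half, so the two ends see mirror images of the same tuple.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

accepted = []
t = threading.Thread(target=lambda: accepted.append(srv.accept()[0]))
t.start()

cli = socket.create_connection(srv.getsockname())
t.join()
conn = accepted[0]

client_view = (cli.getsockname(), cli.getpeername())
server_view = (conn.getsockname(), conn.getpeername())

print(client_view[0] == server_view[1])  # True: the client's local end
print(client_view[1] == server_view[0])  # is the server's remote end

cli.close()
conn.close()
srv.close()
```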
Even UDP is not that much different. Years ago when we wanted to use UDP for a network product that needed more than 64K ports, we coded it over UDP and then added another layer that basically emulated TCP by prioritizing, reassembling, and retransmitting packets to ensure reliable delivery.
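The reassembly step of such a TCP-over-UDP layer can be sketched in a few lines: tag each datagram with a sequence number, buffer out-of-order arrivals, and deliver payloads strictly in order (a toy illustration, not the product's code; prioritization and retransmission are omitted):

```python
# Datagrams arrive tagged with a sequence number; out-of-order arrivals
# are buffered until the missing piece shows up, then every contiguous
# run is delivered in order.
def reassemble(packets):
    """packets: iterable of (seq, payload) pairs, possibly out of order."""
    buffered = {}
    expected = 0
    for seq, payload in packets:
        buffered[seq] = payload
        while expected in buffered:   # deliver every contiguous run
            yield buffered.pop(expected)
            expected += 1

arrived = [(1, b"wor"), (0, b"hello "), (2, b"ld")]
message = b"".join(reassemble(arrived))
print(message)  # b'hello world'
```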
Like I said, I can see some new API built on top of sockets maybe taking off and assisting programmers but I can't see it replacing sockets.
Of course, maybe my imagination is just too limited but I'd certainly need to be convinced rather than just told.