The Artima Developer Community

Gathering Scattered I/O

2 replies on 1 page. Most recent reply: Sep 28, 2007 12:48 PM by giovanni deretta

Frank Sommers

Posts: 2642
Nickname: fsommers
Registered: Jan, 2002

Gathering Scattered I/O Posted: Sep 18, 2007 11:30 AM
Have your cake and eat it, too, with STL extensions. In this chapter excerpt from his latest book, Matthew Wilson shows you how to take full advantage of the STL iterator abstraction without sacrificing the block-transfer efficiency of scatter/gather I/O.

http://www.artima.com/cppsource/scattered_io.html

What do you think of Wilson's take on Scatter/Gather I/O?


Daniel Berger

Posts: 1383
Nickname: djberg96
Registered: Sep, 2004

Re: Gathering Scattered I/O in C++ Posted: Sep 27, 2007 3:47 PM
Isn't this somewhat built into the Windows API now (since Windows 2000) with ReadFileScatter() and WriteFileGather()?
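It is, and the same facility has long existed on POSIX systems as readv()/writev(), which transfer a list of discontiguous buffers in a single system call. A minimal round-trip sketch of the POSIX counterparts (the Windows calls additionally require page-aligned, unbuffered, overlapped handles):

```cpp
#include <sys/uio.h>   // readv, writev, struct iovec
#include <unistd.h>    // pipe, close
#include <cassert>
#include <cstring>
#include <string>

// Gather two discontiguous buffers into one pipe write, then scatter
// the bytes back into two destination buffers; returns the reassembled
// payload so callers can check the round trip.
std::string scatter_gather_roundtrip() {
    int fds[2];
    if (pipe(fds) != 0) return "";

    char hdr[]  = "HDR:";
    char body[] = "payload";
    iovec out[2] = { { hdr, sizeof hdr - 1 }, { body, sizeof body - 1 } };
    writev(fds[1], out, 2);          // one syscall, two source buffers

    char a[4], b[7];
    iovec in[2] = { { a, sizeof a }, { b, sizeof b } };
    readv(fds[0], in, 2);            // fills a, then b, in order

    close(fds[0]);
    close(fds[1]);
    return std::string(a, 4) + std::string(b, 7);
}
```

The kernel sees the whole buffer list at once, so no intermediate coalescing copy is needed.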

giovanni deretta

Posts: 1
Nickname: gpderetta
Registered: Sep, 2007

Re: Gathering Scattered I/O in C++ Posted: Sep 28, 2007 12:48 PM
std::deque is usually implemented exactly like a scatter/gather buffer range: a vector of pointers to fixed-size buffers. Yet it still manages to deliver good performance.

A well-known trick is to use segmented iterators (see the paper "Segmented Iterators and Hierarchical Algorithms" by Matt Austern): a special iterator protocol segments a range into a range of ranges, each defined by simpler iterators. Standard algorithms can use this protocol to iterate more efficiently over hierarchical data structures such as deques. This way you get maximum efficiency without sacrificing transparency: algorithms and containers remain completely orthogonal. Note also that, if implemented correctly, you can have more than one level of segmentation.
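A minimal sketch of the idea, with illustrative names (not from any standard library): the container exposes its blocks, and a hierarchical algorithm walks segment by segment instead of element by element.

```cpp
#include <vector>
#include <cassert>
#include <cstddef>
#include <cstring>

// Deque-like container built from fixed-size blocks; the block table is
// public here purely so the hierarchical algorithm below can see it.
struct SegmentedBuffer {
    static constexpr std::size_t kBlock = 4;      // fixed block size
    std::vector<std::vector<char>> blocks;

    void push(char c) {
        if (blocks.empty() || blocks.back().size() == kBlock)
            blocks.push_back({});
        blocks.back().push_back(c);
    }
};

// Hierarchical copy: one memcpy per contiguous segment, rather than a
// per-element loop over a flat element iterator.
std::size_t copy_out(const SegmentedBuffer& src, char* dst) {
    std::size_t off = 0;
    for (const auto& seg : src.blocks) {                 // outer: segments
        std::memcpy(dst + off, seg.data(), seg.size());  // inner: contiguous run
        off += seg.size();
    }
    return off;
}
```

A segmented std::copy would do essentially this, falling back to the flat element-by-element loop for containers that expose no segment structure.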

Unfortunately, the standard library implementations that use this protocol internally do not expose it to users, so users cannot take advantage of it for their own containers. Coupled with the fact that you can't use a deque for vectored I/O (there is no way to know where the contiguous buffers begin and end, i.e. you have no access to the protocol itself), this means users must roll their own solution.
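Rolling your own essentially means exposing the block boundaries so they can be turned into an iovec list; a sketch under POSIX, with an illustrative vector-of-blocks container standing in for a custom deque:

```cpp
#include <sys/uio.h>   // writev, struct iovec
#include <unistd.h>
#include <cassert>
#include <vector>

// Expose each block of a deque-like container as an iovec entry so the
// whole container can be written with one vectored-I/O call.
std::vector<iovec> make_iovecs(std::vector<std::vector<char>>& blocks) {
    std::vector<iovec> iov;
    for (auto& b : blocks)
        iov.push_back({ b.data(), b.size() });   // base pointer + length
    return iov;
}

// One writev() covers every block; with std::deque this is impossible
// because its block table is a private implementation detail.
ssize_t write_all(int fd, std::vector<std::vector<char>>& blocks) {
    std::vector<iovec> iov = make_iovecs(blocks);
    return writev(fd, iov.data(), static_cast<int>(iov.size()));
}
```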

I've implemented my own deque with segmented iterators, along with wrappers for some common standard algorithms that take advantage of this property.

I've successfully used such a deque with s/g I/O: my own I/O abstraction could be fed any container type and used traits classes to detect whether the container supported direct I/O (for example, a plain char array or a std::vector<char>) or provided segmented iterators over direct-I/O subranges (my own deque).
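The traits-based dispatch could look something like the following; the trait and function names are illustrative, not taken from the poster's library.

```cpp
#include <cstring>
#include <deque>
#include <type_traits>
#include <vector>

// Trait reporting whether a container's storage is one contiguous run,
// so its data can be handed to read()/write() directly.
template <typename C>
struct is_direct_io_capable : std::false_type {};

template <typename T, typename A>
struct is_direct_io_capable<std::vector<T, A>> : std::true_type {};

template <typename T, std::size_t N>
struct is_direct_io_capable<T[N]> : std::true_type {};

// An I/O front end can then pick a strategy at compile time:
template <typename C>
const char* io_strategy(const C&) {
    return is_direct_io_capable<C>::value
        ? "direct"      // hand the single contiguous buffer to write()
        : "segmented";  // walk segments and build an iovec array
}
```

A real framework would dispatch to two overloads rather than return a string, but the detection mechanism is the same.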

The framework was very flexible and, most importantly, I/O was completely decoupled from the buffer abstraction. It could be easily extended to work with any third-party container.

BTW, my deque is a random access range, so all blocks must be the same size. But if you weaken the requirement to a bidirectional range, you can have a hybrid list/vector-like structure that behaves as a bidirectional range, whose block view supports splice, and whose segmented iterators support random access within a block. Someday I'll get around to implementing it; it should be easier than a deque.

You can find my library at libstream.sourceforge.net. I haven't worked on it for ages (my current work is not network programming, unfortunately), but someday I will clean it up.

gpd

Copyright © 1996-2019 Artima, Inc. All Rights Reserved. - Privacy Policy - Terms of Use