bqt at softjar.se
Fri Oct 5 14:49:00 CDT 2007
On Fri, 2007-10-05 at 12:00 -0500, "Jerome H. Fine"
<jhfinedp3k at compsys.to> wrote:
> Johnny, I believe that your comments are very clear and
> they address many of the aspects which concern the way
> in which MSCP handles read / write requests in both small
> systems (single user systems like RT-11 and even TSX-PLUS
> since the device driver still handles one request at a time)
> and large systems (such as RSX-11 and especially VMS).
Thank you. And yes, there might be a big difference between systems like
RT-11, and larger ones. I don't know enough about the innards of RT-11
device drivers to tell how it does things, nor how programs might utilize
them.
> (NOTE that all of the following comments are with respect
> to running programs on a 750 MHz Pentium III with 768 MB
> of RAM using W98SE as the operating system, ATA 100 disk
> drives of 160 GB and Ersatz-11 as the application program
> running a mapped RT-11 monitor, RT11XM. While I have very
> good reason to believe that the same relative results will
> be obtained on a Pentium 4 under WXP, again using Ersatz-11
> running RT-11, I have done almost no testing at this time.
> OBVIOUSLY, comparison with real DEC hardware of a PDP-11
> and a VAX can only be done on a relative basis since HD:
> exists ONLY under Ersatz-11. In addition, since the speed
> of disk I/O on the Pentium III (even more so on a Pentium 4)
> is so much faster (more than 100 times) than the transfer
> rate on a SCSI Qbus or Unibus, the comparison could be very
> misleading since CPU time vs I/O transfer time might become
> much more significant. For just one example, when the BINCOM
> program that runs on a real DEC PDP-11/73 is used to compare
> 2 RT-11 partitions of 32 MB on 2 different ESDI Hitachi hard
> drives (under MSCP emulation with an RQD11-EC controller),
> it takes about the same time (about 240 seconds) to copy
> an RT-11 partition and to compare those same 2 partitions.
> Under Ersatz-11, the copy time is about 2 1/4 seconds and the
> BINCOM time is about 6 1/2 seconds using MSCP device drivers.
> When the HD: device driver is used under Ersatz-11, the times
> are about 1 second for the copy and about 6 seconds for the
> BINCOM - I have not bothered to figure out why the reduction
> is only 1/2 second instead of 1 1/4 seconds.)
There is a big problem with using E11 here, since it queues and
optimizes disk I/O as well, and so does the underlying OS also in the
end. So it is tricky to do much evaluation of the controllers as such.
You basically see what is best under E11.
> However, I believe that my comments on the efficiency of
> using the MSCP device driver under RT-11 vs the efficiency
> of using the HD: device driver probably need to be analysed
> much more closely. The other aspect of the analysis that
> is missing is the efficiency with which Ersatz-11 implements
> the MSCP emulation as opposed to the HD: "emulation". It
> is unlikely, but possible, that Ersatz-11 has much higher
> overhead for MSCP since the interface is so much more
> "intelligent", whereas the HD: interface only needs to transfer
> the data to the user buffer based on the IOPAGE register.
Analysis is always a good thing. And yes, the implementation of the
respective emulation in E11 plays a big part.
> A bit more information may help.
> (a) The HD: device driver can be used BOTH with and without
> interrupts being active after the I/O request is issued.
> It makes no difference under W98SE since the I/O request
> is ALWAYS complete before even one PDP-11 instruction is
> executed. This result also applies to the MSCP device
> driver which I could modify to see if it might make a
> difference in efficiency. However, when I attempt to
> compare the copy of a 32 MB RT-11 partition with HD:,
> the time difference between using interrupts and not
> using interrupts is so negligible that it is almost
> impossible to measure the total time difference to copy
> the 32 MB RT-11 partition using the available PDP-11
> clock which measures in 1/60 of a second. Since there
> are 60 ticks in a second, the accuracy is better than
> 2% over 1 second which seems adequate to determine on
> an overall basis if using interrupts vs no interrupts
> makes a significant difference. Obviously if there is
> no significant time difference at the 2% level (of one
> time tick of 1/60 of a second), then avoiding the extra
> RT-11 code to handle the interrupt does not play a
> major role in the increased efficiency of HD: vs MSCP.
> I conclude that would be the same for MSCP as well.
An interrupt handler that took anywhere near a fraction of 1/60 of a
second is so broken it should be shot.
Basically, you cannot measure anything with a clock of that low a
resolution.
Also, you need to check for I/O completion before doing the next
operation. If you skip that, you will lose. So the question then is: is
it acceptable to be in a tight loop waiting for I/O to complete, or do
you want the machine to be able to do something else meanwhile?
Let us instead look at this from a theoretical point of view.
With the HD: driver, you need to somehow make sure that the previous
operation has completed before you start the next one. This must all be
done in PDP-11 code. You have the choice of either doing it polled,
using a tight loop, or having an interrupt when the device is ready.
Now, having a tight loop will most likely be better, but then your
machine will do nothing else while it is waiting for the previous
operation to complete.
So most likely you will want to use interrupts.
But no matter what, we're talking about executing PDP-11 instructions the
whole time here. Either the polled loop, followed by setting up the
registers for the next I/O request, or an interrupt handler entry
doing the same setup after some register saving and so on.
As such, they take a damn lot longer than native instructions on
the host CPU. This is important.
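To make the polled alternative concrete, here is a minimal sketch in Python (the real thing would of course be PDP-11 assembly; the Device class and all its methods are invented for the illustration):

```python
# Toy model of the polled, stop-and-go strategy. All names are invented;
# a real HD: driver would spin on a CSR done bit in PDP-11 code instead.

class Device:
    def __init__(self, data):
        self.data = data        # blocks on the simulated disk
        self.pending = None
        self.result = None

    def start_read(self, blk):
        self.pending = blk      # request accepted; completes asynchronously

    def poll_done(self):
        # On real hardware the done bit shows up some time later; here the
        # operation completes on the first poll to keep the sketch short.
        if self.pending is not None:
            self.result = self.data[self.pending]
            self.pending = None
        return True

def polled_copy(dev, nblocks):
    """Issue one request, spin until it is done, then issue the next.
    The CPU does nothing useful inside the wait loop."""
    out = []
    for blk in range(nblocks):
        dev.start_read(blk)
        while not dev.poll_done():   # the tight polling loop
            pass
        out.append(dev.result)
    return out

print(polled_copy(Device(["blk%d" % i for i in range(4)]), 4))
# ['blk0', 'blk1', 'blk2', 'blk3']
```

The interrupt variant would run the same per-request setup from the handler instead of from the loop; either way, that setup is PDP-11 code.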
The host CPU in the HD: case is the machine running the E11 emulator,
while for the MSCP case is either the machine running E11, or the local
CPU on the controller card.
My point here then is: once the previous operation has completed, the
HD: controller must run through some PDP-11 code before the next I/O
operation can start.
With MSCP, the host CPU needs to run through some code before the next
I/O operation can start, while the PDP-11 isn't burdened at all in this
process.
Obviously, the MSCP case is better.
But this is only true if you queue more than one I/O request to the
MSCP controller. If you don't take advantage of this feature in MSCP,
then your MSCP controller will be the same as the HD: controller, but
with more overhead, since there are more bits to fiddle on the PDP-11
before a new I/O request can start using MSCP. Basically stop and go.
Not efficient at all.
So it's a question of whether the device driver takes advantage of this
or not. And this might be something that RT-11 doesn't do.
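A back-of-envelope model (with made-up numbers) of why stop-and-go loses against a deep queue. Call the per-request PDP-11 setup time "setup" and the device transfer time "dev":

```python
# Invented numbers: per-request setup time (PDP-11 code) and device time.

def stop_and_go(n, setup, dev):
    # each request waits for the previous one; setups never overlap I/O
    return n * (setup + dev)

def deep_queue(n, setup, dev):
    # requests are queued ahead; after the first setup the controller is
    # never idle (assumes setup < dev, so setups hide behind device work)
    return setup + n * dev

print(stop_and_go(100, 2.0, 10.0))  # 1200.0
print(deep_queue(100, 2.0, 10.0))   # 1002.0
```

The units are arbitrary; the point is only that with a deep queue the per-request setup cost disappears behind the device work.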
Oh, and no, disk operations under E11 aren't so fast that no
instructions will be executed before the operation completes. However,
with disk caching and tricks inside E11, a few operations might appear
to go that fast, before reality catches up with you.
Disk I/O still takes on the order of milliseconds to complete. Guess how
long one PDP-11 instruction in E11 takes?
If anything, computer speeds have advanced much more than disk speeds, so
that even with emulated computers, we now manage to do a *lot* while
waiting for disks.
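Rough arithmetic for scale, with an assumed (not measured) per-instruction time:

```python
# Assumed ballpark figures, not measurements of E11.
emulated_instr_s = 50e-9   # ~50 ns per emulated PDP-11 instruction (assumption)
disk_io_s = 5e-3           # ~5 ms for one physical disk operation

# roughly how many emulated instructions fit inside a single disk I/O
print(round(disk_io_s / emulated_instr_s))
```

Even if the per-instruction figure is off by a factor of a few, it is still tens of thousands of instructions per disk operation.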
And already with the trusty old real hardware, we sat around waiting
for disks for long stretches of time...
> (b) The other aspect is the ability of MSCP to order
> and internally queue I/O requests based on the most
> efficient order for them to be performed, probably
> when there are many requests outstanding and the
> required head movement can be minimized by choosing
> the order in which to execute the requests - which
> thereby increases overall I/O throughput. If I can
> make a suggestion, I respectfully ask what the interface
> between the device driver and the controller (or host
> adapter in the case of SCSI for MSCP - note that ESDI
> controllers are also MSCP) has to do with efficient
> internal queuing of I/O requests. Perhaps my viewpoint
> based on RT-11 is distorted (or TSX-PLUS for that matter
> which uses almost the identical code as RT-11 as far as
> I am aware), but I ask the question. It seems to me
> that a simple (dumb and efficient) interface such as
> HD: is only the final step in instructing the "controller"
> to perform the disk I/O whereas the actual "intelligent"
> aspect is probably going to be in the device driver
> of the respective operating system such as RT-11, TSX-PLUS,
> RSX-11 or VMS. Obviously the "intelligent" portion can
> also be in the actual controller or host adapter, but based
> on my VERY limited understanding of MSCP implementation
> by both DEC and 3rd party MSCP controller and host adapter
> manufacturers for both the Qbus and Unibus, all of the
> "intelligence" of internal queuing of I/O requests for
> the above 4 example operating systems is performed in
> the device driver, if anywhere.
There are obviously several reasons why this needs to be in the
controller.
If we go back to the first point I discussed above, about MSCP being
more efficient if we queue several operations at once, without having to
wait for each operation to complete before queueing the next one, then
you must also do queue optimization inside the controller, since
obviously you cannot easily synchronize and reorder operations inside the
device driver once you have queued them to the controller. That would
require you to withdraw that I/O request, so that you can insert another
and then reinsert the revoked one again to get the correct ordering.
Now, if you don't want the efficiency of being able to queue new
operations immediately, but instead only issue them once the previous
one is finished, then you can also easily do queue optimizations in the
driver.
However, this all also is related to another aspect of MSCP that I
mentioned: bad block replacement. Since the controller does this without
the involvement of the driver, you cannot from the software really say
which ordering of the I/O requests that are optimal.
Bad block replacements mean that blocks that you might think are adjacent
might physically be very far apart on the disk. In short, the software
has no clue what the physical layout of the disk looks like, and
therefore the software can't really do correct I/O queue optimizations.
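A small illustration with invented numbers: one remapped bad block is enough to make the driver's logical-order "optimization" worse than the physical order that only the controller can know:

```python
remap = {5: 900}    # logical block 5 went bad; its replacement is far away

def physical(block):
    return remap.get(block, block)

def head_travel(order, start=0):
    # total head movement, in block positions, for a given request order
    pos, total = start, 0
    for blk in order:
        p = physical(blk)
        total += abs(p - pos)
        pos = p
    return total

requests = [10, 5, 20]
print(head_travel(sorted(requests)))                # driver's "optimal" order: 1800
print(head_travel(sorted(requests, key=physical)))  # physically aware order: 900
```

The driver sorts by logical block number and sends the head straight out to block 900 and back; the controller, knowing the remap table, can visit 10 and 20 first.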
Another aspect is once again efficiency. By letting the controller do
the queue optimizations, you unburden the normal CPU from this task,
which otherwise takes quite a few CPU cycles to do.
The controller can play with this while it's doing a transfer and is
just idling anyway, so even if it has a slower CPU, this ends up
essentially free.
So from several points of view, this is both more efficient, and leads
to smaller and nimbler device drivers, since the work needed for
efficiency is moved to the controller instead.
> Please confirm if my assumption is correct with regard to
> where the "intelligence" is located, i.e. in the device
> driver or the controller / host adapter. Based on the
> answer, it will then be possible to continue this
> discussion. It would be helpful to isolate where the
> decreased efficiency of using the DEC concept of MSCP
> is introduced and what specifically causes the decrease
> in efficiency. For example, on my Pentium III, I have
> noted that when I copy large files of 1 GB or larger,
> it is almost always useful to do no other disk I/O
> during the minute it takes for the copy to complete
> unless the additional disk I/O for another job is
> trivial in comparison and I can usefully overlap my
> time looking at a different screen of information.
> Whenever possible, I also arrange to have different
> disk files which will be copied back and forth on
> different physical disk drives if the files are larger
> than about 32 MB since the time to copy any file (or
> read a smaller file) is so short in any case. While
> I realize that on a large VMS system with hundreds of
> users there will be constant disk I/O, I still suggest
> that the efficiency of the device driver to controller
> interface may play a significant role in overall I/O
> throughput rates.
Well, as to your thoughts above, I think I've covered that now.
As for explanations why you're observing faster operations with the HD:
driver in RT-11, my first suspicion would be that the program doesn't
issue multiple reads/writes to the controller, but instead issues one
and waits for it to complete before doing the next one.
If the program indeed tries to be optimal, then my next guess would be
the device driver not issuing several operations to the same controller,
leading to the same behaviour.
While I admittedly don't know enough about RT-11 to say, and obviously
don't know how your program does it, I know that in RSX, the device
driver does issue the request immediately if possible, and as such you can
have several operations outstanding in parallel. If I were to write a
naïve copy program, I might not care enough to try to get the disks
working at full speed, which would lead to the same problem you're
observing. However, I know how to write such a program in RSX so that I
really would keep the controller busy at all times.
But that would involve using asynchronous I/O.
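For the flavour of it, here is the structure of such a copy sketched in Python, with a thread pool standing in for asynchronous QIOs (an illustration of the idea only, not RSX code):

```python
from concurrent.futures import ThreadPoolExecutor

def copy_overlapped(read_block, write_block, nblocks):
    """Read block i+1 while block i is being written, so the device
    stays busy instead of going stop-and-go."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        future = pool.submit(read_block, 0)              # prime the pipeline
        for i in range(nblocks):
            data = future.result()                       # wait for block i
            if i + 1 < nblocks:
                future = pool.submit(read_block, i + 1)  # read ahead
            write_block(i, data)                         # overlaps the read-ahead

src = list(range(8))
dst = [None] * 8
copy_overlapped(lambda i: src[i], lambda i, d: dst.__setitem__(i, d), 8)
print(dst == src)  # True
```

The key property is that there is always one request in flight while the previous result is being consumed.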
Another thing: The MSCP driver in RSX is maybe the most complex driver
there is (in competition with the TT: driver). There is a reason for
this. MSCP is a complex controller.
One interesting thing to note is that RSX device drivers can use I/O
queue optimization. There is support in the kernel for drivers to do
this. And the MSCP driver does have the code for this, but it is turned off
by default, since it's mostly useless, for the reasons above. Only if
very many packets are queued will the driver I/O queue optimization even
begin to be used, and then over just the trailing I/O requests that the
controller doesn't have room for.
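In other words, something like this (an invented structure, not the actual RSX code): the controller's queue slots are filled in arrival order, and only the overflow gets sorted by the driver:

```python
CONTROLLER_SLOTS = 4    # assumed controller queue depth

def enqueue(requests):
    to_controller = requests[:CONTROLLER_SLOTS]         # issued at once, as-is
    driver_queue = sorted(requests[CONTROLLER_SLOTS:])  # only the tail sorted
    return to_controller, driver_queue

print(enqueue([30, 10, 50, 20, 90, 40, 70]))
# ([30, 10, 50, 20], [40, 70, 90])
```

So the driver-side optimization only ever touches requests the controller could not accept yet.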