The value of assembler language programmers [was RE: Algol vs Fortran was RE: VHDL vs Verilog]
roger.holmes at microspot.co.uk
Wed Feb 10 14:15:34 CST 2010
>>> *Every* generation of programmers has *always* looked down on their successors
>>> as using tools that waste too much computer time to do too little. Of course,
>>> *my* generation (started programming in 1969 on a 1401) is right. ;-)
>> I certainly agree with you in principle, but I still wonder why even
>> non-GUI application bloat has to be as bad as it is. We used to put
>> 25-40 users on an 8MB VAX before it would start to swap. Now, still
>> using a character based interface (which happens to be ssh vs direct
>> serial connect, but that doesn't affect CLI application size), a
>> program to tell me what processes are active on the system is 8MB by
>> itself, vs a few dozen K bytes (I should go back and dig out one of
>> those programs from the old days and port it to a modern machine to
>> compare library bloat vs application bloat. Fortunately, I have my
>> backups from 25 years ago).
> ICBW, but I think a lot of the bloat is caused by the layer upon layer
> upon layer upon layer of application interface code. My theory is that
> all those layers arose because of inadequate or incompetent design in
> the first place. Then too, I think we have a lot of "features" that are
> rarely used and that we would be better off without, to say nothing of
> all the changes for no apparent reason other than to just be different.
> I also suspect that some of those spurious features are the root cause
> of a lot of the security holes. Too, your "non-GUI" application is
> probably actually running inside a system GUI which only emulates the
> non-GUI user interface you think you're using. :-)
I too started programming in 1969, but on a 7094, though as a schoolboy sending off cards one week to get the results back the next. Yes, of course we're right :-)
I agree there are far too many levels of interface code, many with bugs in them which sometimes get corrected at a different level. Take text on the Mac: there was a simple technology for drawing text in QuickDraw on the Lisa. It did proportional fonts and different text sizes, and handled descent, ascent and leading. On the Mac they added a Text Editing manager. Then they had to allow for internationalization, including non-left-to-right languages, then they added kerning and so on, then they tried to replace the Text Editing manager with the Multi-Lingual Text Editing manager. Then along came Unicode and we got ATSUI (the Apple Type Services for Unicode Imaging), and so it went on. All the old levels are still available, though deprecated and unavailable to 64-bit apps. Trying to get a Carbon application to make Unicode text appear at the right size on a non-72dpi screen AND print properly is something of a nightmare.
I would like to discuss Moore's law and how it seems to have broken down in recent years. Processor speeds are still increasing, but not at the expected rate, and I wonder if the real problem is RAM speed, which does not seem to have kept up and no longer even seems to be quoted when you buy a computer, or at least a Mac. Of course, on-chip and level-2 caches have made tight loops over small pieces of data acceptably fast, but real-world programs don't work that way. Think about rotating a 12-megapixel image, for instance: yes, the code is a tight loop, but the data isn't. Think about rendering a 3D scene with many textured objects, accurate shadows, per-pixel shading and anti-aliasing, ready for printing on an A0 printer (about 33 by 47 inches) and complex enough not to fit the capabilities of the graphics processor, so it has to be done on the main processor cores. The data being processed is far too big to fit in the caches, and so are the output pixel maps, though I admit it only processes one pixel at a time. Oh, and while you are at it, think about error-diffusing the output.
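To make the rotation example concrete, here is a minimal C sketch (my own illustration, not anyone's production code) of a 90-degree clockwise rotation of an 8-bit greyscale image. The loop is as tight as you like, but for a 12-megapixel image consecutive writes land a whole row pitch apart, so nearly every store touches a different cache line and the working set dwarfs any cache:

#include <stdint.h>
#include <stddef.h>

/* Rotate a w x h 8-bit greyscale image 90 degrees clockwise.
 * Reads walk src sequentially, but each successive write lands
 * h bytes apart in dst: for a ~4000x3000 image that is ~3000
 * bytes between stores, so almost every write misses the cache
 * and the 12 MB of pixels never stay resident. */
void rotate90_cw(const uint8_t *src, uint8_t *dst,
                 size_t w, size_t h)
{
    for (size_t y = 0; y < h; y++) {
        for (size_t x = 0; x < w; x++) {
            /* (x, y) in src maps to (h - 1 - y, x) in dst,
             * whose row pitch is h */
            dst[x * h + (h - 1 - y)] = src[y * w + x];
        }
    }
}

The usual mitigation, for what it's worth, is loop blocking: process the image in cache-sized tiles so the strided side of the copy becomes short bursts that stay resident. That helps, but it doesn't change the basic point that the limit is memory, not the processor.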
I know that on the PowerPC the rotation of a one-bit-per-pixel image actually ran quicker if I turned the cache off, because for every bit I fetched it loaded the cache with four words of data. Does something similar happen on Intel?
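For what it's worth, here is a sketch of what such a 1-bpp rotation inner loop might look like (again my own illustration, assuming MSB-first bit order and a pre-zeroed destination bitmap). Each destination row is one source column, so every source read is a whole row pitch below the last: the cache fills a full line just to deliver a single bit, and with a big bitmap that line is evicted before its neighbouring bits are wanted again, which is consistent with the uncached path winning:

#include <stdint.h>
#include <stddef.h>

/* Rotate a w x h 1-bit-per-pixel bitmap 90 degrees clockwise.
 * src_rowbytes/dst_rowbytes are the byte pitches of a row.
 * Writes into dst are sequential, but every source read is
 * src_rowbytes apart: one usable bit per cache-line fill. */
void rotate1bpp_cw(const uint8_t *src, uint8_t *dst,
                   size_t w, size_t h,
                   size_t src_rowbytes, size_t dst_rowbytes)
{
    for (size_t x = 0; x < w; x++) {      /* dest row = src column */
        for (size_t y = 0; y < h; y++) {
            int bit = (src[y * src_rowbytes + (x >> 3)]
                       >> (7 - (x & 7))) & 1;
            size_t dx = h - 1 - y;        /* dest column */
            if (bit)                      /* dst assumed zeroed */
                dst[x * dst_rowbytes + (dx >> 3)]
                    |= (uint8_t)(0x80 >> (dx & 7));
        }
    }
}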
By the way, I spent three hours this morning showing a BBC regional news crew my 1962 mainframe; apparently it will be condensed down to three minutes. I don't have a transmission date, it didn't go out today and will probably only be shown in the south-east of England, but it should be on the BBC web site. Oh, and a couple of weeks ago I posted an old video of it on YouTube; if anyone is interested the URL is http://www.youtube.com/watch?v=VsBPuUJPvKg or just Google ICT 1301 and select video. I hope to post a better one later in the year.