On 2025-03-13 6:01 p.m., Paul Koning via cctalk wrote:
An additional comment on that. To understand any
field well, it is generally helpful, and at times crucial, to have some knowledge of early
work. I just read a translation of some of the works of Galileo that makes that point in
so many words in the introduction. Similarly, while people don't build compilers
today the way they did in 1960, it is still valuable to read how Dijkstra and Zonneveld
wrote the first ALGOL-60 compiler and, along the way, had to invent many of the compiler
construction techniques that were later routinely taught and used in university compiler
construction courses. There's an outstanding analysis of that in the Ph.D. thesis of
Gauthier van den Hove. (It's not yet a published document; I think that has something
to do with copyright and publication practices at some universities, perhaps Dutch or
European conventions.) He subtitled his thesis "New insights from old
programs".
Hardware really defined what one could do, and that is often glossed
over in modern texts. You had multi-pass compilers because you had
so little real memory (a sketch of that shape follows below). Later,
when C and Pascal came out, they had ample memory to run in and less
primitive I/O, so they were deemed better and quicker. Would one have
had different languages had {} and [] been around at the time, along
with a full 32K of core? Might hard drives have been designed for
capacity rather than for fast RPM to swap virtual memory faster? You
see 7- and 9-track tape drives alongside computers in movies and TV
until the late '80s, yet very few books describe hardware interfacing.
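
To make the memory pressure concrete, here is a minimal C sketch of
the multi-pass shape: each pass spills its output to a scratch file so
that only one pass's working set is in core at a time. The input file
name and the trivial pass bodies are invented for illustration, not
taken from any historical compiler:

    #include <stdio.h>
    #include <stdlib.h>

    /* Pass 1: scan the source and spill a compact intermediate
       stream to a scratch file, so the scanner's tables can be
       thrown away before pass 2 runs. */
    static void pass1_scan(FILE *src, FILE *out)
    {
        int c;
        while ((c = fgetc(src)) != EOF)
            fputc(c, out);          /* stand-in for real tokenizing */
    }

    /* Pass 2: re-read the spilled stream and do the next stage. */
    static void pass2_translate(FILE *in)
    {
        int c;
        while ((c = fgetc(in)) != EOF)
            ;                       /* stand-in for code generation */
    }

    int main(void)
    {
        FILE *src = fopen("prog.src", "r");  /* hypothetical input */
        FILE *tmp = tmpfile();   /* scratch file, playing the role
                                    tape or drum did on small machines */
        if (!src || !tmp)
            return EXIT_FAILURE;

        pass1_scan(src, tmp);
        fclose(src);             /* pass-1 state can go away now */
        rewind(tmp);
        pass2_translate(tmp);
        fclose(tmp);
        return EXIT_SUCCESS;
    }

With more core you can keep the whole intermediate form in memory and
the passes collapse into one program, which is part of why later
compilers looked so different.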
Byte addressing is needed for C in general, and that makes a big
difference in computer languages and design. Data addressing went
from records to byte streams, marking a whole new style of computing.
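
As a small illustration of what byte addressing buys C: any object
can be inspected byte by byte through a character pointer, and a file
is just a stream of bytes with no record structure imposed by the
language. A minimal sketch (the dump helper and file name are
invented for illustration):

    #include <stdio.h>
    #include <stddef.h>

    /* Print any object as raw bytes; legal because C lets a
       char pointer address every byte of every object. */
    static void dump_bytes(const void *obj, size_t len)
    {
        const unsigned char *p = obj;
        for (size_t i = 0; i < len; i++)
            printf("%02x ", p[i]);
        putchar('\n');
    }

    int main(void)
    {
        int word = 0x01020304;
        dump_bytes(&word, sizeof word);  /* byte order is visible */

        /* A file is a byte stream: read it one byte at a time. */
        FILE *f = fopen("prog.src", "rb");  /* hypothetical file */
        if (f) {
            int c, n = 0;
            while ((c = fgetc(f)) != EOF)
                n++;
            fclose(f);
            printf("%d bytes\n", n);
        }
        return 0;
    }

On a word-addressed machine with record-oriented I/O, neither half of
that sketch maps cleanly onto the hardware.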
Ben.