Structured Fortran - was Re: Self modifying code, lambda calculus

Jay Jaeger cube1 at charter.net
Thu Sep 24 10:00:47 CDT 2015


On 9/24/2015 12:22 AM, Eric Smith wrote:
> On Wed, Sep 23, 2015 at 8:30 AM, Jay Jaeger <cube1 at charter.net> wrote:
>> An int just has to be able to store numbers of a certain magnitude.
>> Same with long.  You do have to be able to convert between longs (and
>> possibly ints) and addresses (*).  So, you make an int 5 digits (which
>> matches the natural length of addresses) and longs something like 10
>> digits.  You don't have to simulate anything, near as I can tell.  Then
>> the length of an int is 5 and a long is 10 (instead of the more typical
>> 2 and 4).
> 
> And the length of a char?  It's required that all types other than
> bitfields be fully represented as multiple chars, not e.g. an int
> being two and a half chars, and a char has to cover at least the range
> 0..255, or -128..127, and it has to have a range based on a power of
> two.
> 
> ISO/IEC 9899:1999(E) §3.6 ¶1 - a byte has to hold any member of the
> basic character set
> ISO/IEC 9899:1999(E) §3.7.1 ¶1 - a character is a C bit representation
> that fits in a byte
> ISO/IEC 9899:1999(E) §5.2.4.2.1 ¶1 - the size of a char is CHAR_BIT
> bits, which is at least 8
> ISO/IEC 9899:1999(E) §6.2.6.1 ¶2-4 - everything other than bitfields
> consists of bytes
> 
> ISO/IEC 9899:1999(E) §6.2.6.1 ¶5 - Some data types other than char may
> have machine representations that can't use all of the possible bit
> patterns of the storage allocated; those representations are called
> "trap representations". The char (and unsigned char) types can't have
> trap representations.
> 
> ISO/IEC 9899:1999(E) §6.2.6.2 ¶1 - unsigned integer types must have a
> range of 0 to (2^n)-1, for some natural number n.
> ISO/IEC 9899:1999(E) §6.2.6.2 ¶2 - signed integer types must have a
> range of -(2^n) to (2^n)-1 or -((2^n)-1) to (2^n)-1.
> 
> On a decimal machine, if you use three digits for a char, you have to
> arrange that all your other types are multiples of three digits, with
> each three-digit group only using valid char representations, because
> accessing a char/byte out of a larger integer type is not allowed to
> be a trap representation, because chars can't have a trap
> representation.

I don't *have* to do any such thing.

> 
> If an unsigned char is three digits with values from 0..255, an
> unsigned int can't be five digits. It has to be six digits, and the
> only valid representations for it would have values of 0..65535. It
> can't have any valid values in the range of 65536..999999.
> 

It does not *have* to be six digits.

You seem to be supposing that I said one could/would implement ANSI/ISO
C on a 1410 in native code (as opposed to some kind of binary
threaded-code simulator that has been suggested).  I did not.  I said C,
and by that I meant something presumably contemporary with the machine
in its last years.  I would not suggest that one would implement
ANSI/ISO C on such a machine, any more than I would expect to implement
current versions of FORTRAN on such a machine.  Heck, there wasn't even
a FORTRAN IV for the 1410.

I would expect a char to be 6 or 7 bits on a 1410 - one storage
character, rather than 8.  One could conceivably use the word-mark as an
extra char bit to get 7 bits, and it would make some sense to do so.
But if that were abused (say, by accessing an int as char [5]), it could
leave a wordmark in the middle of an int - something best avoided if at
all possible, since otherwise integer types would have to be moved to an
intermediate storage location using record marks, rather than wordmarks,
to terminate the move.

An int would be 5 characters long.
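
To make those sizes concrete, here is a minimal sketch, in
(anachronistically modern) C, of the layout being proposed; the program
and its imagined output are my illustration, not anything that ever ran
on a 1410:

    #include <stdio.h>

    int main()
    {
        /* Hypothetical sizes: char = 1 storage character (6 or 7 bits),
           int = 5 characters (5 decimal digits, the natural address
           length), long = 10 characters (10 decimal digits).           */
        printf("char %d, int %d, long %d\n",
               (int) sizeof(char), (int) sizeof(int), (int) sizeof(long));
        /* would print "char 1, int 5, long 10" on such an implementation */
        return 0;
    }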

If one goes back to the definition of C in "The C Programming Language",
then one sees a less restrictive specification than the contemporary
ANSI/ISO specification.  The restrictions of ANSI/ISO C came later
because of things that folks tended to assume and do in their C programs
because of the hardware they typically ran on, i.e., that chars were
capable of holding 8 bit binary numbers.

"Objects declared as characters (char) are large enough to store any
member of the implementation's character set, and if a genuine character
from that character set is stored in a character variable, its *value*
is *equivalent* to the integer code for that character.  Other values
may be stored into character variables, but the implementation is
machine-dependent."  (asterisk emphasis added).

"Equivalent" is extremely important here, as it frees one from the
notion of it having to be the exact same bit representation.  It means
that if you cast from char to int (access it as a value), or pass a char
as a formal parameter, the int gets the value of the character as a set
of bits, and vice versa.  It does NOT require that the int be the
*identical* bits as the char.
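
A small sketch of what "equivalent" buys you (written in present-day C
purely for clarity; the point applies just as well to the C of that
era):

    #include <stdio.h>

    int main()
    {
        char c = 'A';   /* stored as whatever code the machine uses
                           for 'A'                                      */
        int  i = c;     /* i receives the *value* of that code, even if
                           the compiler must widen or re-encode it to
                           build a 5-digit int; identical bits are not
                           required                                     */

        if (i == 'A')
            printf("char and int agree on the value of 'A'\n");
        return 0;
    }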

Common practice is, of course, to use chars to store small, but still
useful, integer values - in this case, -32 to +31 (6 bits) or -64 to +63
(7 bits).  Would this break some programs that assume a char can hold
values from -127 to 127?  Of course.  Would those programs be "fixable"
to the extent that they were not dependent upon machine I/O hardware and
the like?  Yes, they should be.
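
A sketch of the kind of fix meant here, assuming 6- or 7-bit chars (the
<limits.h> check is an ANSI-era idiom, shown only to make the idea
concrete):

    #include <limits.h>

    /* A program that assumed 8-bit chars might have declared
           char count;
       which breaks once CHAR_MAX is only 31 or 63.  The simple fix is
       to widen the type (an int here is 5 decimal digits), or to let
       the preprocessor choose:                                         */
    #if CHAR_MAX < 127
    typedef int  small_count;   /* chars too narrow; fall back to int   */
    #else
    typedef char small_count;   /* 8-bit-or-wider chars: char is fine   */
    #endif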

""Plain" integers have the natural size suggested by the host machine
architecture".

Thus one would end up with a C char type which is only slightly
different from FORTRAN CHARACTER variables, but which can still store
small integer values in the spirit of C.
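
As a final sketch of that dual role (again assuming the 6-bit range
above):

    char initial = 'J';   /* character storage, much as a FORTRAN
                             CHARACTER variable                         */
    char delta   = -5;    /* a small integer, comfortably inside
                             -32..+31                                   */
    char toobig  = 100;   /* would NOT fit in a 6-bit char - exactly
                             the sort of assumption a port would have
                             to fix                                     */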

Enough already.

JRJ

