cctalk Digest, Vol 80, Issue 5

Adam Thornton athornton at gmail.com
Thu May 6 16:53:02 CDT 2021


From: Liam Proven <lproven at gmail.com>
> To: "Discussion: On-Topic and Off-Topic Posts" <cctalk at classiccmp.org>
> Subject: Re: Motor generator
>
> I think because for lesser minds, such as mine, [APL is] line noise.
>
> A friend of mine, a Perl guru, studied A-Plus for a while. (Morgan
> Stanley's in-house APL dialect.) He said to me, "When I came back
> to Perl, I found it irritatingly verbose..." and then was
> immediately and deeply shocked at the thought.
>
> I seriously think this is why Lisp didn't go mainstream. For a certain
> type of human mind, it's wonderful and clear and expressive, but for
> most of us, it's just a step too far.
>
> Ditto Forth, ditto Postscript, etc.
>
> Plain old algebraic infix notation has thrived for half a millennium
> because it's easily assimilated and comprehended, and many arguably
> better notations just are not.
>
> The importance of being easy, as opposed to being clear, or
> unambiguous, or expressive, etc., is widely underestimated.
>
>
Yes, that.  C is a great assembly-language preprocessor for a
PDP-11.  The PDP-11 is a beautiful, intelligible architecture, where
things happen one at a time, in sequence.  That is easy to think
about.  Unfortunately, it has very little to do with the way modern
high-performance silicon gets stuff done.
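
The oft-cited illustration: C's pointer-stepping idioms fall almost
one-to-one onto the PDP-11's autoincrement addressing modes.  A toy
sketch (my own, not anybody's production code):

    void copy_string(char *dst, const char *src)
    {
        /* On a PDP-11 the loop body compiles down to roughly
           MOVB (R1)+,(R0)+ plus a branch on the condition codes;
           the autoincrement modes do all the stepping. */
        while ((*dst++ = *src++) != '\0')
            ;
    }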

(Aside: it's also weird that one-thing-at-a-time sequencing is the
thing that feels logical and intuitive to us, since it is absolutely
not how our brains work.)

I would argue that Forth and Postscript are hard to understand for a
different reason than APL.  APL is inherently vectorized, and more or
less requires that you treat whole matrices as single entities, and
not many people's brains work that way; it's hard enough to learn to
treat complex numbers as single entities.  Forth and Postscript, by
contrast, require you to keep a really deep stack in your head to
follow the code, and people aren't very good at doing that for more
than three or four items (well short of the famous "seven, plus or
minus two" of working memory).  Both are much more difficult for most
people to work with and reason about than something imperative and
infix-based.
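
To make the APL point concrete, here is the same computation both
ways; the C is mine and deliberately pedestrian:

    /* Element-at-a-time, the way most of us naturally think: */
    void add_vectors(double *c, const double *a, const double *b, int n)
    {
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

APL writes the whole thing as  C ← A + B : the arrays are single
entities and the loop simply isn't there.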
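
And to make the stack point concrete, here is (2+3)*(4+5) evaluated
postfix-style ("2 3 + 4 5 + *" in Forth spelling), as a toy C
simulation of the operand stack; the comments show exactly what the
reader has to carry in their head:

    #include <stdio.h>

    int main(void)
    {
        int s[8], sp = 0;             /* the operand stack      */

        s[sp++] = 2;                  /* stack: 2               */
        s[sp++] = 3;                  /* stack: 2 3             */
        sp--; s[sp - 1] += s[sp];     /* +  ->  stack: 5        */
        s[sp++] = 4;                  /* stack: 5 4             */
        s[sp++] = 5;                  /* stack: 5 4 5           */
        sp--; s[sp - 1] += s[sp];     /* +  ->  stack: 5 9      */
        sp--; s[sp - 1] *= s[sp];     /* *  ->  stack: 45       */

        printf("%d\n", s[0]);         /* prints 45 */
        return 0;
    }

Even this trivial expression peaks at three items on the stack; real
Forth or Postscript code routinely runs much deeper than that.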

The fundamental problem is the impedance mismatch between the way
most people think and the way we can continue to squeeze performance
out of silicon.  Changing how people think would at the very least
take a radical reframing of curricula, and might not work anyway:
look at the failure of the New Math, which was indeed very elegant,
taught mathematics from first principles as set theory, and was not
at all geared to the way young children _actually learn things_.  I
don't think the mismatch is really tractable.  Our best hope is to
make the silicon really good at generating and scheduling dependency
graphs, so that it can dispatch lots of pieces of what feels like a
sequential problem in parallel and come out with the same answer you
would have gotten doing it the naive one-step-at-a-time way.  But
we've already done that, and, yeah, it mostly works, except the
abstraction is leaky, and then you get Meltdown and Spectre.
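
A toy example of such a graph (mine, purely illustrative): what reads
as four sequential steps is really three independent multiplies
feeding one sum, and the hardware is free to dispatch the multiplies
in parallel:

    #include <stdio.h>

    static double dot3(double a, double b, double c,
                       double d, double e, double f)
    {
        double t1 = a * b;      /* independent of t2 and t3     */
        double t2 = c * d;      /* independent of t1 and t3     */
        double t3 = e * f;      /* independent of t1 and t2     */
        return t1 + t2 + t3;    /* the join: waits on all three */
    }

    int main(void)
    {
        printf("%g\n", dot3(1, 2, 3, 4, 5, 6));   /* prints 44 */
        return 0;
    }

Same answer as the naive one-step-at-a-time reading; the graph
extraction is exactly the part we have taught the silicon to do.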

I don't have any answers other than "move to Montana, drop off the grid,
and raise dental floss."

Adam

