There can also be nanocode, where code drives microcode, which in turn drives nanocode.
http://www.easy68k.com/paulrsm/doc/dpbm68k1.htm gives an exposition of the nanocode used
below the microcode in the 68000, and also examples of vertical and horizontal microcode.
Paul Konig has just posted on the vertical/horizontal distinction - I shall not duplicate
that beyond emphasising that microcode is an overloaded term; compare the i64 usage,
horizontal, vertical, and "normal" people's usage.
It is arguable that code, state machines and microcode are "the same thing";
that is, they are all logical sequential engines. To differentiate, I shall offer the view
that you change code 4 times an hour, microcode 4 times a day and state machines once a
year. Another differentiator of microcode from state machines is that it is usually held
in RAM, although the old men often used PROMs for production and speed.
These days microcode works well in FPGAs: RAM access times of around 3 ns without
pipelining, and as Xilinx BRAM comes in 1k word x 36 bit quanta, three blocks give e.g. a
108-bit control word. BRAM and DSP resources permit implementation of pretty much any
(array of) mills, and the architecture can provide a plurality of parallel memories and
address generators - the sort of thing which was conceivable but probably not
implementable in the bit-slice days.
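As a rough illustration of the sort of thing I mean, here is a behavioural C model (not
HDL): a 1k-deep control store whose microword feeds two parallel address generators and a
simple mill. The 1k x 36 quantum matches the BRAM primitive, but the 108-bit field
layout, the generators and the mill are all invented for the example.

    /* Behavioural sketch of an FPGA-style microcode engine: a 1k-deep
     * control store of 108-bit words (three 1k x 36 BRAM quanta), whose
     * fields step two parallel address generators and one "mill".
     * All field positions and widths are invented for illustration.    */
    #include <stdint.h>
    #include <stddef.h>

    #define CSTORE_DEPTH 1024
    #define MEM_MASK     0xFF            /* toy 256-entry data memories  */

    /* 3 x 36 significant bits = 108; mid/hi would carry further fields. */
    typedef struct { uint64_t lo, mid, hi; } uword;

    static uword cstore[CSTORE_DEPTH];   /* loaded from a .mem/.coe file */

    typedef struct { uint32_t step, ptr; } agen;    /* address generator */

    static uint32_t agen_next(agen *a)   /* one stride of one generator  */
    {
        uint32_t addr = a->ptr;
        a->ptr += a->step;
        return addr & MEM_MASK;
    }

    void run(const uint32_t *mem_a, const uint32_t *mem_b, size_t steps)
    {
        agen ga = { .step = 1, .ptr = 0 };
        agen gb = { .step = 4, .ptr = 0 };
        uint32_t upc = 0;                /* micro program counter        */
        uint32_t acc = 0;                /* the mill's accumulator       */

        while (steps--) {
            uword u = cstore[upc & (CSTORE_DEPTH - 1)];

            /* invented field extraction from the low 36-bit third      */
            uint32_t alu_op = (uint32_t)(u.lo & 0xF);
            uint32_t next   = (uint32_t)((u.lo >> 4) & 0x3FF);

            uint32_t a = mem_a[agen_next(&ga)];  /* two memories read   */
            uint32_t b = mem_b[agen_next(&gb)];  /* in parallel         */

            switch (alu_op) {
            case 0:  acc  = a + b; break;
            case 1:  acc += a * b; break;        /* DSP-slice style MAC */
            default:               break;
            }
            upc = next;                          /* next micro-address  */
        }
        (void)acc;                               /* result sink in toy  */
    }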
Dropping down the stack to the original sequential-software / parallel-hardware question:
superscalar architectures represent a hybrid approach - multiple mills potentially
tailored to the task, scope for dynamic resource resolution, and the heavy lifting of
scheduling micro-operations handled by the compiler back end / code generator; see e.g.
https://en.wikipedia.org/wiki/Superscalar_processor#:~:text=Superscalar%20C….
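In the small, the dual-issue decision looks something like the toy check below - just the
textbook structural / RAW / WAW hazard tests, not modelled on any particular core:

    /* Toy in-order dual-issue check: textbook hazard tests only, no
     * register renaming - not modelled on any real core.              */
    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { UNIT_ALU, UNIT_MUL, UNIT_LSU } unit_t;

    typedef struct {
        unit_t  unit;                /* which mill this micro-op needs  */
        uint8_t dst, src1, src2;     /* register numbers                */
    } uop;

    /* May op b issue alongside op a in the same cycle? */
    static bool can_dual_issue(const uop *a, const uop *b)
    {
        if (a->unit == b->unit)                      /* structural      */
            return false;
        if (b->src1 == a->dst || b->src2 == a->dst)  /* RAW dependence  */
            return false;
        if (b->dst == a->dst)                        /* WAW, no rename  */
            return false;
        return true;
    }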
Interestingly, Synopsys have a toolset, ASIP Designer, which implements superscalar
architectures; see e.g.
https://www.synopsys.com/designware-ip/processor-solutions/asips-tools.html
Unless I am mistaken, the AI engines provided in Xilinx's ACAP range use this technology.
In any case, you can get FPGAs with an array of superscalar processors; see e.g.
https://adaptivesupport.amd.com/s/article/1132493?language=en_US
A final point is that most of the old men's techniques remain current - Wilkes used
microcode around 1950. What changes is the price point at which you can reduce them to
practice.
Martin
-----Original Message-----
From: Steve Lewis via cctalk [mailto:cctalk@classiccmp.org]
Sent: 04 May 2025 20:06
To: General Discussion: On-Topic and Off-Topic Posts <cctalk(a)classiccmp.org>
Cc: Steve Lewis <lewissa78(a)gmail.com>
Subject: [cctalk] Re: Wang TTL BASIC
The IBM 5100 also uses the term "microcode" - but I'm not sure if that term, pre-1975,
meant the same as what, say, Intel used it for around the x86?
I've seen a glimpse into the syntax of the x86 microcode. In the IBM
5100's case, its CPU is distributed across 14 or so SLT chips - so I never
fully understood how it implements its PALM instruction set. I know the
two large ICs on that processor are two 64-byte memory things (dunno if categorized as
SRAM or DRAM, or neither), mapped to the first 128 bytes of system RAM (so a high-speed
pass-through, where those 128 bytes correspond to the registers used by each of the 4 interrupt
levels). That PALM processor was developed right around the time of the Intel 4004 (late
'71 / mid '72), and was stout enough to run a version of APL about a year later (I
see Intel made a version of FORTRAN for the 8008, or at least a claim for it in the
Intertec brochures).
Anyway, all I mean is, in the early 70s did "microcode" just mean instruction set,
and did that change a few years later? Or did microcode always mean some kind of "more
primitive sequence" used to construct an instruction set?
-Steve
On Sun, May 4, 2025 at 1:33 PM ben via cctalk <cctalk(a)classiccmp.org> wrote:
On 2025-05-04 2:11 a.m., jos via cctalk wrote:
> > I recall that system had many boards, the whole "CPU" box was external
> > to the monitor (and in the earliest versions, the power supply was also
> > a large external box). I can't really fathom creating a BASIC out of
> > raw TTL, or maybe I'm misunderstanding the approach.
>
> You build a processor with some TTL, and then implement a BASIC on
> that microprocessor.
> There is always this intermediate step, no machine executes BASIC
> directly in TTL.
Well for BASIC that is true.
The Fairchild Symbol Computer was a test of just how far TTL could go.
> Look here for an example of a processor (Datapoint 2200) in TTL:
> https://bitsavers.org/pdf/datapoint/2200/jdreesen_shematics/DP2200_mb.pdf
>
> Jos
Microcoded machines could likely be programmed to run BASIC.
Ben.