32-bit acorn software: RISC iX • Re: MEMC and RISC iX

I think that the focus on 32K page size as a topic of debate was a bit of a red herring. The real problem with R140 was not that the pages were too big, it was that there weren't enough of them. Sure, dividing the same memory into smaller pages gives you more of them, but memory was cheap enough by then that the 4Mbyte limit itself had become a small number. R260, still with 32K pages but more of them, worked well.

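Putting some numbers on "not enough of them": as I understand the MEMC design, the CAM has a fixed 128 entries, one per physical page, so the page size is forced to scale with the fitted memory rather than being a free design choice. A trivial back-of-the-envelope sketch in C (the figures, not the code, are the point):

[code]
#include <stdio.h>

/* MEMC's CAM has a fixed 128 entries, one per physical page, so the
 * page size must grow with fitted memory: the same 128 mappings are
 * all you get whether the machine has 512K or 4M. */
int main(void)
{
    const unsigned long mem_sizes[] = {
        512UL << 10, 1UL << 20, 2UL << 20, 4UL << 20
    };
    const unsigned long cam_entries = 128;

    for (unsigned i = 0; i < sizeof mem_sizes / sizeof mem_sizes[0]; i++) {
        unsigned long page_size = mem_sizes[i] / cam_entries;
        printf("%4luK memory -> 128 pages of %2luK\n",
               mem_sizes[i] >> 10, page_size >> 10);
    }
    return 0;
}
[/code]
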
4M was rather small for a Unix workstation, really, and I feel that the R140 was almost more a statement of intent than a competitive product. When the R260 arrived about a year later, running four or five times faster and offering twice the memory plus SCSI for the same price (even though Acorn had planned to release it at a higher price), it at least brought a substantial value-for-money improvement for anyone wanting a Unix workstation from Acorn. I don't think any of the A-series machines were ever superseded in such spectacular fashion.

If Acorn had been able to get the ARM3 ready by 1989, launching the R260 instead of the R140, they would have had a more credible competitor, with performance more in line with similarly priced DEC and Sun machines. Had that brought about more customer interest, it might have driven FPA development somewhat, and one can envisage (or dream about) the FPA being ready by 1990. At that point, they would have had a product that might merely have been bringing up the rear of the pack, due to the other architectural limitations, as opposed to falling off the back of it and consequently seeing diminishing interest from potential customers.

(I also considered what the effect on morale might have been had the R-series been more competitive, particularly amongst those doing the actual work on the product line. Bringing something to market that isn't particularly competitive can still motivate engineers and developers if they feel that the successor will be better received and that more support within the organisation might be forthcoming, but I can imagine that with the FPA dragging on, no new chipset upgrades likely, and a lot of the action happening elsewhere, it would have been hard to retain Unix specialists at Acorn.)

It is, however, curious that 4K has remained the page size of choice in most architectures right up to the present day, with superpages being more about short-cutting the multi-level page tables needed in large-memory systems than a genuine choice of page size. I'm not sure how much of that is historical inertia versus 4K genuinely being considered the all-time optimal size for all workloads; I'm sure there is plenty of literature discussing such matters.

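To illustrate the short-cutting: on x86-64 with 4K pages, a virtual address indexes four levels of page table over a 12-bit offset, and a 2MB superpage simply terminates the walk one level early, folding the last 9 index bits into the offset. A rough sketch in C (the decomposition is standard; the function itself is mine, purely illustrative):

[code]
#include <stdint.h>
#include <stdio.h>

/* Split a 48-bit x86-64 virtual address into its page-table indices:
 * four 9-bit levels over a 12-bit offset for 4K pages.  A 2MB
 * superpage stops the walk at the PD level, leaving a 21-bit offset. */
static void decompose(uint64_t va)
{
    unsigned pml4 = (va >> 39) & 0x1ff;
    unsigned pdpt = (va >> 30) & 0x1ff;
    unsigned pd   = (va >> 21) & 0x1ff;
    unsigned pt   = (va >> 12) & 0x1ff;

    printf("4K  page: PML4=%u PDPT=%u PD=%u PT=%u offset=%u\n",
           pml4, pdpt, pd, pt, (unsigned)(va & 0xfff));
    printf("2MB page: PML4=%u PDPT=%u PD=%u offset=%u (one level fewer)\n",
           pml4, pdpt, pd, (unsigned)(va & 0x1fffff));
}

int main(void)
{
    decompose(0x00007f3a12345678ULL);
    return 0;
}
[/code]
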
The aspects of the ARM/MEMC architecture more deserving of debate are the fact that it is virtually mapped (a performance benefit, at the cost of context-switch overhead) and the 'CAM' structure (as your references say, bad for Unix, because you can't map the same page in two places).

With the CAM, it seems that you can't really leave entries around when switching between (unprivileged) processes, because that would allow a process to accidentally see memory belonging to another. There is no address space identifier to qualify entries, only the page protection level, which distinguishes between user, supervisor and operating system modes. So unprivileged entries in the CAM have to be flushed upon a context switch and repopulated for the process being resumed, either opportunistically, to avoid page faults, or lazily, to adapt to the actual demands of the process.

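To make that concrete, here is a toy model in C of an inverted, CAM-style page store along those lines; the entry layout and names are my own invention for illustration, not MEMC's actual encoding:

[code]
#include <stdbool.h>
#include <stdint.h>

#define NPAGES 128

/* No address space identifier in an entry: only a protection level. */
enum prot { PROT_USER, PROT_SUPERVISOR, PROT_OS };

struct cam_entry {
    bool      valid;
    uint32_t  logical_page;  /* the key matched on translation */
    enum prot prot;
};

/* One entry per physical page: the index *is* the physical page
 * number, so a physical page can hold only one mapping at a time. */
static struct cam_entry cam[NPAGES];

/* Translation is an associative search on the logical page. */
static int translate(uint32_t logical_page)
{
    for (int phys = 0; phys < NPAGES; phys++)
        if (cam[phys].valid && cam[phys].logical_page == logical_page)
            return phys;
    return -1;  /* fault: no mapping, repopulate on demand */
}

/* On a context switch every unprivileged entry must go: with no
 * address space identifier, a stale user mapping would let the next
 * process see the previous one's memory. */
static void context_switch_flush(void)
{
    for (int phys = 0; phys < NPAGES; phys++)
        if (cam[phys].valid && cam[phys].prot == PROT_USER)
            cam[phys].valid = false;
}

int main(void)
{
    cam[5] = (struct cam_entry){ true, 0x123, PROT_USER };
    int before = translate(0x123);  /* 5 */
    context_switch_flush();
    int after = translate(0x123);   /* -1: must be repopulated */
    return (before == 5 && after == -1) ? 0 : 1;
}
[/code]

Note how the one-mapping-per-physical-page restriction falls straight out of indexing the store by physical page, which bears on the sharing cases below.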

Of course, there might be physical pages that are deliberately shared between processes at the same virtual address, and obviously these need not be flushed. In principle, you can have the same physical page available at different virtual addresses in different processes, but such mappings cannot exist in the CAM at the same time due to its structure, as you note. However, without any address space annotations, you wouldn't really want multiple mappings of this nature even if the CAM did support it.

Perhaps a more onerous restriction applies to cases where one might want to map a physical page to multiple virtual pages within the same process. That might sound like an esoteric need, but I can imagine situations where one might allocate a placeholder page (containing zeros, but not being a view onto /dev/zero or anything like that, or perhaps having predefined contents) which then appears at various virtual addresses until such time as a page is actually written. This kind of thing might be awkward with the CAM, perhaps necessitating needless allocation and rapidly using up those rather large pages.

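As a sketch of that consequence, assuming entirely made-up helpers (alloc_phys_page, phys_to_addr and map_page are hypothetical stand-ins, not anything from RISC iX), every placeholder mapping ends up costing a whole fresh 32K page plus a copy, because the template page cannot simply be aliased:

[code]
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE (32u * 1024u)  /* those rather large 32K pages */
#define NPAGES    128

/* Minimal stand-ins so the sketch is self-contained; a real kernel
 * would have its own allocator and mapping primitives. */
static unsigned char phys_mem[NPAGES][PAGE_SIZE];
static int next_free;

static int alloc_phys_page(void)
{
    return next_free < NPAGES ? next_free++ : -1;
}

static void *phys_to_addr(int phys)
{
    return phys_mem[phys];
}

static int map_page(uint32_t logical_page, int phys)
{
    (void)logical_page;  /* would install the single CAM entry here */
    return phys;
}

/* With only one mapping per physical page, the template can't be
 * aliased at several virtual addresses: each placeholder mapping
 * forces a fresh allocation and a 32K copy. */
int map_placeholder(uint32_t logical_page, const void *template_contents)
{
    int phys = alloc_phys_page();
    if (phys < 0)
        return -1;
    memcpy(phys_to_addr(phys), template_contents, PAGE_SIZE);
    return map_page(logical_page, phys);
}
[/code]
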

Posted by paulb — Fri Dec 29, 2023 9:31 pm


