
Re: virtual memory and overlays

On Fri, 28 Oct 2022 at 11:51, rvjansen@... <rvjansen@...> wrote:
Just a wild question here with regard to the compiler we need to use for cREXX.

The gcc compiler ported to VM/370 suffers from address-space size problems (so I have heard from reliable sources), which even led to the '380' version of Hercules being created so that it could be compiled (so I have read). A larger language like PL/I does not have these problems, however, and when I read up on its history, this seems to be due to the fact that it has a lot of 'phases' that could be implemented as overlays, something the linkage editor/loader could handle in the pre-virtual-memory days (and, given IBM's reputation for preserving compatibility, probably still can).

Yeeesss...

This has led to the anecdote that one IBM lab was toiling over phases and overlays to fit PL/I into even the smallest S/360 machines while another team was putting the finishing touches on virtual memory. Had they talked to each other, a lot of work could have been avoided.

It's conceivable, but probably a lot more subtle in the details. IBM has roughly forever pitted internal groups against each other, and there is generally a winner and one or more losers. Sometimes the losers manage to get their project repurposed into something else, and then become winners. Or if not, the people get reallocated and the cycle begins again. I doubt that's going to change.

Now gcc is a lot bigger than anything from that era, and we have problems: where it at first compiled cREXX on VM/370, now it does not. My speculation is that if we went the opposite route from the historical development and packaged this compiler in overlays, we could lessen its memory footprint.

Is there anyone from that era who still knows how this is done, can tell us whether it would be possible, and can advise on what to do?

Overlays (officially "planned overlay structure") are one of several ways to save space. Another is to do it yourself on the fly with LOAD/LINK/XCTL, and I believe, though I'm not 100% sure, that this is what PL/I F does. (There is a summary of how this works in the PLM for these compilers.) In both cases the saving from overlay schemes is in code space, and I rather doubt that this is the main problem. Programs like GCC, and pretty much everything else these days, use a huge amount of *data* storage to keep track of the program being compiled as it progresses through the various stages.
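
For what it's worth, here is a minimal sketch of the "do it yourself on the fly" idea in modern dress, purely as an analogy: POSIX dlopen/dlsym/dlclose stand in for LOAD/LINK/DELETE, and the module names and the phase_entry entry point are hypothetical, not anything taken from PL/I F. The point is just that only one phase's code occupies storage at a time.

    /* build with: cc overlay_sketch.c -ldl */
    #include <stdio.h>
    #include <dlfcn.h>

    /* Run one "phase": load its module, call its entry point, and
     * unload it again before the next phase is loaded. */
    static int run_phase(const char *module)
    {
        void *h = dlopen(module, RTLD_NOW);          /* cf. LOAD */
        if (h == NULL) {
            fprintf(stderr, "cannot load %s: %s\n", module, dlerror());
            return -1;
        }
        int (*entry)(void) = (int (*)(void))dlsym(h, "phase_entry");  /* cf. LINK */
        int rc = (entry != NULL) ? entry() : -1;
        dlclose(h);                                  /* give the code space back */
        return rc;
    }

    int main(void)
    {
        /* hypothetical phase modules, run in sequence */
        const char *phases[] = { "./phase1.so", "./phase2.so", "./phase3.so" };
        for (unsigned i = 0; i < sizeof phases / sizeof phases[0]; i++)
            if (run_phase(phases[i]) != 0)
                return 1;
        return 0;
    }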

PL/I F, like Assembler F and probably all the other old-style compilers, uses work files, which are disk datasets. Data structures representing the various aspects of the program at various stages - for example the symbol dictionary - are written to and read back from these disk work files. Whether there is a single work file that is logically partitioned or, as with Assembler F, three different physical files each containing different sets of work-in-progress items, main storage is typically used for little more than buffers for the data items in the work files.

One can think of this as a kind of application-specific virtual storage, rather than the general purpose virtual storage we are all used to.
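
Here is a minimal sketch of that idea, assuming a hypothetical fixed-length dictionary record (nothing like PL/I F's actual layout): the symbol dictionary lives in a scratch work file, and main storage holds only a one-record buffer.

    #include <stdio.h>

    typedef struct {
        char name[32];
        long value;
    } SymEntry;                       /* hypothetical dictionary record */

    /* Append one entry to the work file; return its record number. */
    static long dict_put(FILE *wf, const SymEntry *e)
    {
        fseek(wf, 0, SEEK_END);
        long recno = ftell(wf) / (long)sizeof *e;
        fwrite(e, sizeof *e, 1, wf);
        return recno;
    }

    /* Read entry `recno` back into the caller's one-record buffer. */
    static int dict_get(FILE *wf, long recno, SymEntry *e)
    {
        fseek(wf, recno * (long)sizeof *e, SEEK_SET);
        return fread(e, sizeof *e, 1, wf) == 1;
    }

    int main(void)
    {
        FILE *wf = tmpfile();         /* the scratch "work file" */
        if (wf == NULL)
            return 1;

        SymEntry e = { "COUNTER", 42 }, back;
        long r = dict_put(wf, &e);
        if (dict_get(wf, r, &back))
            printf("%s = %ld\n", back.name, back.value);

        fclose(wf);
        return 0;
    }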

Something like GCC does indeed have a much larger executable, but it also makes much heavier use of data storage, and I think it has no concept of work files at all.

Before setting out on anything like your approach I would want to understand where the main storage is going.
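
If the VM/370 port still supports them, GCC's own developer options may help answer that: -fmem-report and -ftime-report exist in mainline GCC and print per-pass memory and time statistics, though whether the 370 build retains them is an assumption on my part.

    gcc -fmem-report -c program.c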

Tony H.
