
Re: Which Hercules and which host?

 

On 03.18.2025 10:50, Zane Healy via groups.io wrote:
My preference for running something like Hercules or SIMH is a Virtual Machine running on an older x86 box with Linux installed. Log in, start up ’screen’ and then start up the emulator. I’ve used a Raspberry Pi for Multics and MVS in the past, but find Virtual Machines to be more convenient. I prefer to run OpenSUSE on the VM, but have also used Ubuntu and Red Hat.
100% agree. I always use Debian for servers and have switched to tmux
instead of screen, but yes, this is how I run all of my Hercules and
simh instances -- Debian VM on my VM server with tmux sessions for each
Hercules (MVS 3.8 / VM/370 / etc) listening on a different console port.
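
For anyone wanting to set up something similar, a minimal sketch of that layout might look like this. All session names, paths, and config file names are my own examples, not Matthew's actual setup, and the `echo` makes it a dry run (drop it to really launch):

```shell
#!/bin/sh
# Sketch: one detached tmux session per Hercules instance, each reading
# its own config file (which would define that instance's console port).
# All names and paths here are illustrative, not from the original post.
start_herc() {
    name=$1; conf=$2
    # "echo" makes this a dry run; remove it to actually launch.
    echo tmux new-session -d -s "$name" "hercules -f $conf"
}
start_herc mvs38 /opt/herc/mvs38/mvs38.cnf
start_herc vm370 /opt/herc/vm370/vm370.cnf
# Reattach to an instance's console later with: tmux attach -t mvs38
```

Each instance's Hercules config would then carry its own console listener port (the `CNSLPORT` statement) so the 3270 clients don't collide.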

Question: has anyone looked at using Proxmox for this? I’m looking to migrate my home lab off of ESXi, due to the cost.
Yes. Early this year I made the switch to Proxmox on my VM servers after
a decade of ESXi. Very happy with it.

ONE IMPORTANT TIP, though: I always set the VM CPU to "host" instead of
the default (x86-64-v2-AES) when creating new VMs. I don't have a
cluster of Proxmox servers where I need to worry about live-migrating
VMs between hosts with different CPU models, so it always seemed like a
good idea to me to just pass through the real CPU to VMs instead of
having QEMU fake a specific CPU.
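
For reference, the same change can be scripted on the Proxmox node with the `qm` CLI. VMID 100 is a made-up example, and the command is printed as a dry run here; the new CPU type takes effect the next time the VM is started:

```shell
#!/bin/sh
# Sketch: set an existing Proxmox VM's CPU type to "host" from the node's
# shell instead of the web UI. VMID 100 is an example, not a real VM.
VMID=100
CMD="qm set $VMID --cpu host"
echo "$CMD"   # dry run; on a real Proxmox node, run the command itself
# Afterwards, confirm with: qm config $VMID | grep '^cpu'
```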

It turns out the *one* VM where I accidentally forgot to set the CPU to "host"
when I created it was my mainframe VM that runs all of my Hercules
instances. I was having an intermittent problem where my MVS 3.8
instance wouldn't IPL -- it just hung forever shortly after the IPL
started. No Hercules crash, no errors... it just stopped making forward
progress. After a lot of troubleshooting, I eventually realized the VM
CPU was set to x86-64-v2-AES, not "host". I changed the CPU model of
the VM to "host", and sure enough, the intermittent IPL problem went away!
Hercules has been reliable again since I made that change.

-Matthew


Re: Which Hercules and which host?

 

Dave Wade wrote:

[...]
To me this is more "bloatware", with features such as IEEE
floating point, 31- & 64-bit code, various network interfaces,
and a pile of other stuff I don't use.
I understand perfectly. In fact, this so-called "bloatware" issue that SDL 4.x Hyperion currently suffers from with respect to older legacy operating systems (which I honestly believe is a rather unfair characterization) is, I believe, what ultimately compelled Jay Maynard to create Aethra in the first place. (Jay? True?)

Jay wanted a more modern version of Hercules with all of the known 3.x bugs fixed, but without all of the code necessary to support the features and functionality that legacy operating systems don't really need. Essentially, he wanted an SDL Hercules 4.x Hyperion "Light" designed exclusively for older legacy operating systems (i.e. a "bloatware-free" version of SDL Hercules 4.x Hyperion). (Jay? Yes?)


I would really like a smaller, lighter Hercules for use with
legacy operating systems.
Have you tried Jay's Aethra? Because I believe that was one of his goals -- if not his *primary* goal -- with creating Aethra.


Your choice.

(but I know which one *I* would personally prefer!)
Yes but you aren't running 370 code on a PI.
Correct.

And I'm also not running legacy operating systems. I'm running modern "Z" operating systems, largely because there is significant demand for such a version of Hercules.

If I were an ordinary Hercules user instead of a Hercules developer, I would probably prefer the same version of Hercules that you do, but without the bugs. In other words, I would probably be running Aethra for my heavily modified, personally customized version of DOS/VS (and VM/SP 5, if I could ever find a copy).

p.s. You still haven't "irked" me yet, but you seem to be heading in that direction! ;-)

--
"Fish" (David B. Trout)
Software Development Laboratories

mail: fish@...


Re: Which Hercules and which host?

 

On 19/03/2025 21:27, Fish Fish via groups.io wrote:
Dave Wade wrote:
BlameTroi wrote:
[...]
I'm wanting to rebuild my first serious shop in my home:
VM hosting DOS/VS. I see that there are Hercules Aethra
and SDL Hercules versions. Is there any reason to prefer
one over the other for running VMCE?
I don't believe for any of the 370 based OSs such as VM/370 R6
which is the base for VM/CE there is any difference. In fact
at the risk of irking Jay and Fish I would say that on a PI
one of the 3.X releases might work better as they are a little
lighter.
While it's true that the older 3.x releases of Hercules are certainly faster than the SDL 4.x Hyperion releases, they are also buggier: they fail more than one of SDL Hyperion's quality-assurance tests, producing either incorrect results or outright crashes. I would personally not recommend using them for that reason alone.

As far as I know, Jay's Aethra is supposed to be identical to the current SDL 4.x Hyperion version of Hercules, but without many of the more advanced z/Architecture features that modern "z" operating systems require and that legacy operating systems such as VM/370 R6 have no need for, thus making it slightly faster than SDL 4.x Hyperion. (How much faster, I have no idea; I've never tried to measure it.)


So, to summarize:


* 3.x: buggy. Not personally recommended.

* Aethra: same(?) as SDL 4.x but without the
unneeded modern z/Arch features, and thus
*possibly* faster than SDL 4.x Hyperion.
(But its speed advantage(?) -- if any --
has never been measured as far as I know.)

* SDL 4.x Hyperion: the most current, up-to-date,
and recommended version of Hercules, designed
for BOTH modern z/Arch operating systems AND
older legacy operating systems, but slower
than Hercules 3.x. (Not sure about Aethra,
as I never bothered to measure it.)


So you have to ask yourself: Which would you personally prefer?

To more quickly arrive at *possibly* the wrong answer? (3.x)
Is the 370 emulation really buggy if you disable the assists?

Or to more slowly and more confidently arrive at the right answer? (SDL and probably Aethra as well)
To me this is more "bloatware", with features such as IEEE floating point, 31- & 64-bit code, various network interfaces, and a pile of other stuff I don't use.
I would really like a smaller lighter Hercules for use with legacy operating systems..

Your choice.

(but I know which one *I* would personally prefer!)
Yes but you aren't running 370 code on a PI.
Dave


Re: Which Hercules and which host?

 

Dave Wade wrote:
BlameTroi wrote:
[...]
I'm wanting to rebuild my first serious shop in my home:
VM hosting DOS/VS. I see that there are Hercules Aethra
and SDL Hercules versions. Is there any reason to prefer
one over the other for running VMCE?
I don't believe for any of the 370 based OSs such as VM/370 R6
which is the base for VM/CE there is any difference. In fact
at the risk of irking Jay and Fish I would say that on a PI
one of the 3.X releases might work better as they are a little
lighter.
While it's true that the older 3.x releases of Hercules are certainly faster than the SDL 4.x Hyperion releases, they are also buggier: they fail more than one of SDL Hyperion's quality-assurance tests, producing either incorrect results or outright crashes. I would personally not recommend using them for that reason alone.

As far as I know, Jay's Aethra is supposed to be identical to the current SDL 4.x Hyperion version of Hercules, but without many of the more advanced z/Architecture features that modern "z" operating systems require and that legacy operating systems such as VM/370 R6 have no need for, thus making it slightly faster than SDL 4.x Hyperion. (How much faster, I have no idea; I've never tried to measure it.)


So, to summarize:


* 3.x: buggy. Not personally recommended.

* Aethra: same(?) as SDL 4.x but without the
unneeded modern z/Arch features, and thus
*possibly* faster than SDL 4.x Hyperion.
(But its speed advantage(?) -- if any --
has never been measured as far as I know.)

* SDL 4.x Hyperion: the most current, up-to-date,
and recommended version of Hercules, designed
for BOTH modern z/Arch operating systems AND
older legacy operating systems, but slower
than Hercules 3.x. (Not sure about Aethra,
as I never bothered to measure it.)


So you have to ask yourself: Which would you personally prefer?

To more quickly arrive at *possibly* the wrong answer? (3.x)

Or to more slowly and more confidently arrive at the right answer? (SDL and probably Aethra as well)

Your choice.

(but I know which one *I* would personally prefer!)

--
"Fish" (David B. Trout)
Software Development Laboratories

mail: fish@...


Re: Which Hercules and which host?

 


I am running six old OSes on a Raspberry Pi 5 with 8 GB -- including Multics. All over tmux and SSH tunnels. Working great.

There is no problem running on a Mac either; Hercules Helper (git clone) makes building either of them very, very easy on macOS and Ubuntu (as in: start it and visit the coffee machine easy).

On modern Pis I run the latest versions the helper decides to check out. I agree with Dave regarding the older Pis.
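
For anyone who hasn't used it, the usual Hercules-Helper flow looks roughly like this. The repository URL is the project's real one, but the build script name is from memory, so check the repo's README if it differs; the commands are only printed here as a dry run:

```shell
#!/bin/sh
# Sketch: typical Hercules-Helper build flow on Ubuntu or macOS.
# Script name below is from memory; consult the repo's README.
REPO=https://github.com/wrljet/hercules-helper.git
echo "git clone $REPO"
echo "cd hercules-helper"
echo "./hercules-buildall.sh"   # installs prerequisites, then builds Hercules
```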

best regards,

René.

On 18 Mar 2025, at 19:27, Dave Wade via groups.io <dave.g4ugm@...> wrote:



On 18/03/2025 16:36, BlameTroi via groups.io wrote:

This may be a dumb question, but I couldn't find anything helpful after searching here in groups and generally on the web.

I'm wanting to rebuild my first serious shop in my home: VM hosting DOS/VS. I see that there are Hercules Aethra and SDL Hercules versions. Is there any reason to prefer one over the other for running VMCE?

I don't believe for any of the 370-based OSs, such as VM/370 R6 which is the base for VM/CE, there is any difference. In fact, at the risk of irking Jay and Fish, I would say that on a Pi one of the 3.x releases might work better, as they are a little lighter.

From my reading a while back, MacOS security is a pain in the butt for Hercules and some other software.


I am not a Mac guy, but I think this is only a problem when running post-370 operating systems, which have network interfaces....


While I'm a Mac guy these days, I have old but Windows 11 capable Intel boxes and a few of the various Pi boards, including a 4, available. Are Pis sufficient for VM and possibly DOSVS?

I haven't run Hercules on a Pi for a while. It's slow; I believe it gives performance similar to a 4331/4341/4361-class machine. Give it a try.

Thanks.

Dave
G4UGM







Re: RSCS Assistant

 

Errors were found, so I will be posting an updated set of files soon.

Would it be a good idea to add sequence numbers to the COPY datasets?

I created HELP for the two EXECs.

Part of my idea is to modify VMSETUP EXEC to call the RSCSACC EXEC. Comments, questions, criticisms?

Would it be prudent to create a "Program Product" tape with a MEMO and an INSTALL EXEC?

... Mark S.


Re: Which Hercules and which host?

 

On 18/03/2025 16:36, BlameTroi via groups.io wrote:

This may be a dumb question, but I couldn't find anything helpful after searching here in groups and generally on the web.

I'm wanting to rebuild my first serious shop in my home: VM hosting DOS/VS. I see that there are Hercules Aethra and SDL Hercules versions. Is there any reason to prefer one over the other for running VMCE?
I don't believe for any of the 370-based OSs, such as VM/370 R6 which is the base for VM/CE, there is any difference. In fact, at the risk of irking Jay and Fish, I would say that on a Pi one of the 3.x releases might work better, as they are a little lighter.

From my reading a while back, MacOS security is a pain in the butt for Hercules and some other software.
I am not a Mac guy, but I think this is only a problem when running post-370 operating systems, which have network interfaces....


While I'm a Mac guy these days, I have old but Windows 11 capable Intel boxes and a few of the various Pi boards, including a 4, available. Are Pis sufficient for VM and possibly DOSVS?
I haven't run Hercules on a Pi for a while. It's slow; I believe it gives performance similar to a 4331/4341/4361-class machine. Give it a try.

Thanks.
Dave
G4UGM


Re: Which Hercules and which host?

 


On Mar 18, 2025, at 9:36 AM, BlameTroi via groups.io <blametroi@...> wrote:

This may be a dumb question, but I couldn't find anything helpful after searching here in groups and generally on the web.

I'm wanting to rebuild my first serious shop in my home: VM hosting DOS/VS. I see that there are Hercules Aethra and SDL Hercules versions. Is there any reason to prefer one over the other for running VMCE?

From my reading a while back, MacOS security is a pain in the butt for Hercules and some other software. While I'm a Mac guy these days, I have old but Windows 11 capable Intel boxes and a few of the various Pi boards, including a 4, available. Are Pis sufficient for VM and possibly DOSVS?

Thanks.

My preference for running something like Hercules or SIMH is a Virtual Machine running on an older x86 box with Linux installed. Log in, start up ’screen’ and then start up the emulator. I’ve used a Raspberry Pi for Multics and MVS in the past, but find Virtual Machines to be more convenient. I prefer to run OpenSUSE on the VM, but have also used Ubuntu and Red Hat.

Question: has anyone looked at using Proxmox for this? I’m looking to migrate my home lab off of ESXi, due to the cost.

I have had Hercules running on my MacBook Pro, but it’s definitely a pain. Still, it’s nice to have when on vacation.

Zane



Which Hercules and which host?

 

This may be a dumb question, but I couldn't find anything helpful after searching here in groups and generally on the web.

I'm wanting to rebuild my first serious shop in my home: VM hosting DOS/VS. I see that there are Hercules Aethra and SDL Hercules versions. Is there any reason to prefer one over the other for running VMCE?

From my reading a while back, MacOS security is a pain in the butt for Hercules and some other software. While I'm a Mac guy these days, I have old but Windows 11 capable Intel boxes and a few of the various Pi boards, including a 4, available. Are Pis sufficient for VM and possibly DOSVS?

Thanks.


RSCS Assistant

 

It's taken me a while to get this done, but I have created two EXECs: one for the access order for the RSCS, RSCS1, or RSCSTST userids, and a second one to assemble the code, with or without Peter's modifications, and build the RSCS module to IPL. I have also included templates for the four COPY files which users use for control of their RSCS system: AXSLINKS COPY, AXSROUTE COPY, LAXLINES COPY, and TAGQUEUE COPY.

If you have gone through this process, I would appreciate suggestions to improve this, as I would like to add it to the VM/370 CE distribution.

If you want to try installing it, I would appreciate knowing what isn't clear, or what could be better or easier to use.

As a side note: the first EXEC can be called from VMSETUP EXEC if you add 'EXEC RSCSACC RSCS""' under the -RSCS label. You may wish to comment out the ACCESS commands listed there.

... Mark S.


Added Folder /RSCS_Assistant #file-notice

Group Notification
 

Mark A. Stevens <marXtevens@...> added folder /RSCS_Assistant

Description:
A couple of EXECs to set up the access order and build RSCS, with or without Peter Coghlan's mods. Two formats: a zip of an AWS tape, and a VMARC file.


File /hrc422ds.vmarc uploaded #file-notice

Group Notification
 

The following items have been added to the Files area of the [email protected] group.

By: Ross Patterson <ross.patterson@...>

Description:
HRC422DS NUCXTEXT occasionally causes DMSLIO109S VIRTUAL STORAGE CAPACITY EXCEEDED. See https://github.com/s390guy/vm370/issues/131


Re: IBM Documentation Hidden At IBM

 

replace the * (asterisk or star symbol) before publibz with a forward slash /

That's got it. Thanks, Gonzalo.

De


Re: IBM Documentation Hidden At IBM

 

I am so sorry; I don't know why the link became broken.
Here, I will try a second time.

Best wishes,
Andre


Re: IBM Documentation Hidden At IBM

 

replace the * (asterisk or star symbol) before publibz with a forward slash /


On Fri, Mar 14, 2025 at 6:23 PM Dennis Boone via <drb=[email protected]> wrote:

Is the above mangled from what you sent? IA gives me a "No URL has been
captured for this URL prefix" error. (I live behind a $%^*&
"protection" system, i.e. URL destroyer, sigh.)

De






Re: IBM Documentation Hidden At IBM

 

Is the above mangled from what you sent? IA gives me a "No URL has been
captured for this URL prefix" error. (I live behind a $%^*&
"protection" system, i.e. URL destroyer, sigh.)

De


Re: IBM Documentation Hidden At IBM

 

Hello,

Here is this folder from the web archive. It contains 1754 PDF files. You can download all of them if you like.

Best wishes,
Andre


Re: IBM Documentation Hidden At IBM

 


I googled 1fc5g101 and got this URL as the first hit, so I guess there are ways to find them anyway. Looking for the EREP User's Guide gives the URL as well, though that will give ifc5g103.

The funny part is that if you go to epub/pdf/ it gives a forbidden result, so you do need to specifically enter the full URL of a book.

Regards, Berry.

On 14-03-2025 at 15:05, Mark A. Stevens via groups.io wrote:

Did this by accident, and pulled up an old URL for

and it actually pulled the manual into my web browser. So if you have URLs in your history, you still might be able to retrieve the manual.

Too bad I don't have a complete dump of URLs to the documents.

... Mark S.


IBM Documentation Hidden At IBM

 

Did this by accident, and pulled up an old URL for

and it actually pulled the manual into my web browser. So if you have URLs in your history, you still might be able to retrieve the manual.

Too bad I don't have a complete dump of URLs to the documents.

... Mark S.


Re: Recursive VM installation?

 

As mentioned, technically there's no limit, but in practice there is. It all has to do with the overhead you get when running in virtual environments, especially when CPU instructions need to be emulated. I work with modern z environments; on a current z/VM, virtualization overhead is about 7%, while emulation is effectively about 100% overhead, and more or less the same is true for any VM/ESA or VM/XA system.

You have to consider SIE processing. With SIE your guest can get access to the processor directly, but when SIE is not available all CPU load needs to be emulated by the host levels. SIE can run three levels deep. In the past you were running a mainframe in basic mode, so you could run three levels with SIE: first level (or native), second level, and third level. SIE was introduced with the 370-XA architecture, so VM/370 doesn't include SIE; you would need VM/XA or newer for that.

Then we got LPAR-mode mainframes. Starting with z/Architecture, LPAR mode is the only mode available. PR/SM is basically the first level of virtualization. So now you can run two SIE levels in an LPAR: the VM host (now referred to as native or first level) and a guest under VM (now referred to as second level). Any guest running under the second-level guest (a third-level guest) cannot benefit from SIE processing. Remember, in LPAR mode this third-level guest is actually a fourth-level guest. So in practice, when running a VM LPAR with a VM guest and a VM or VSE guest under the second-level VM, the host now needs to emulate all CPU instructions.

Some 15 years ago we wanted to run a customer in a second-level environment. Indeed we could run their VM (z/VM -> customer z/VM guest -> z/VSE guests) as a second-level guest, but their VSE systems couldn't use SIE. As a result the load was emulated: CPU usage doubled, and it also introduced a lot of other timing-related issues. That wasn't an option for that customer, but we did run another customer as a second-level environment for a couple of years, and they accepted the increased CPU load.

Apart from running on the real hardware, we obviously focus on Hercules. So in this case all CPU instructions need to be emulated, going from x86 hardware to an emulated S/360-S/370-S/390 CPU. Next we run VM in Hercules. As stated, SIE is only found in VM/XA and later, so SIE is not available in VM/370, and a guest CPU needs to be emulated at any level of virtualization. But for argument's sake: Hercules implements the ESA/390 architecture, including SIE, so if you run a newer VM within this system SIE is available -- but only for three levels, and the S/390 CPU is emulated to begin with.
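
As an illustration of the architecture point above, selecting which architecture Hercules presents is a configuration choice. A minimal Hyperion configuration fragment might look like this; all values are illustrative only, and older 3.x releases use the ARCHMODE statement rather than ARCHLVL:

```
# Minimal Hyperion config sketch -- illustrative values only.
ARCHLVL  ESA/390    # ESA/390 includes SIE; S/370 does not
MAINSIZE 256        # main storage size in MB
NUMCPU   1
```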

But technically, you can run Hercules -> VM -> Linux -> Hercules -> Linux, etc. (or whatever configuration you like). The only thing is, your guest may be very slow, as the overhead will double with every new level, especially when SIE is not available at any of the levels (VM/370, or any level beyond third-level virtualization).

At least you're not the first to try this. I have seen virtualization up to 7 levels deep, running various configurations of Hercules, VM, and Linux.

Regards, Berry.

On 11-03-2025 at 15:48, Alexander Huemer via groups.io wrote:

Hi

I am new to this group.
Let me please provide a bit of background why I am making this post.
At my first IT job in 1999 there was an IBM 9221 running VM/ESA and on
top of that VSE/ESA. My involvement with that machine was
(unfortunately) very sporadic, I had to look after other tech.
I got very basic training on the machine, enabling me to do some simple
things ('v net, act' and stuff like that). Unfortunately I forgot most
of what I knew back then over the last 25 years due to not being
involved with mainframes professionally.
While I do play with mainframe tech in my spare time a bit, I cannot
claim any in-depth knowledge.
One thing that was explained to me back then was very impressive to me
and stuck in my mind.

You can install VM on top of VM
My knee-jerk question to my instructor back then was:

How deep can you go?
He didn't know.

Ever since, I have an idea in my head that comes back occasionally.
Can you install VM 'recursively'?
What I mean by that is the following:
Can you prepare an IPL-able VM tape that does the following:
* IPL (obviously)
* without user interaction:
* Some arithmetic to assess suitable values for the next step like
available memory (storage) and available DASD space
* create the necessary infrastructure to run a VM guest
(user account, minidisk, etc.) with the pre-computed values from the
last step
* IPL the same tape that was used originally for the 'bare-metal'
installation inside the just created VM
* Configure the guest system so that it can be reached from the
outside via a 3270 session or some such

This process is supposed to run as unattended as possible, continuing
until some inherent nesting limitation of VM is reached or a required
resource like storage or DASD space is exhausted.

I am lacking the experience with VM to assess whether this is possible
at all or if perhaps it is possible in principle but only with later
versions of VM than VM/370 or something like that.

Anyways, I would be interested in the opinion of people on this mailing
list regarding this topic.
Perhaps you'll tell me this is a stupid idea, but hey ho.

-Alex