P96 v3.2.x not activated on AOS3.2.1 if Z3 RAM is 1GB

  • I'd suggest introducing a "reduced comparator" bit: Many of my designs do not implement the full 8-bit comparator, but only the lower three bits, expecting the system to have a free space in $e9.xxxx to $ef.xxxx. For full compatibility, autoconfig would have to go through all cards first, find the "reduced comparator" cards and make sure to give them one of those seven precious places. Moving to such a two-pass autoconfig process would make it truly "plug and play", as opposed to the current situation where swapping cards around will make certain configs work that otherwise wouldn't.

  • Thinking of this: There would have to be a software-hook between the first and the second pass, as the required "reset" command would not reset the soft-autoconfig engine of the ACA500plus. If we get such a software-hook, it would be easy to prepare the counters in the ACA500plus for the second pass.
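    The two-pass allocation described above can be sketched in a few lines. This is an illustrative Python model only, not OS code; the card names and the "reduced_comparator" flag are assumptions for the example, and the reset plus software hook between passes is a hardware event this model doesn't capture.

```python
# Sketch of two-pass AutoConfig: reserve the seven 64K slots at
# $E9.0000-$EF.0000 for "reduced comparator" cards first, then let
# full-comparator cards take whatever remains.
SMALL_SLOTS = [0xE90000 + i * 0x10000 for i in range(7)]  # $E9..$EF

def two_pass_config(cards):
    """cards: list of dicts with 'name' and 'reduced_comparator' keys.
    Returns {name: base_address}; raises if a reduced card can't fit."""
    placement = {}
    free_small = list(SMALL_SLOTS)
    # Pass 1: reduced-comparator cards must get one of the seven 64K slots.
    for card in cards:
        if card["reduced_comparator"]:
            if not free_small:
                raise RuntimeError("no 64K slot left for " + card["name"])
            placement[card["name"]] = free_small.pop(0)
    # Pass 2: full-comparator cards take the remaining 64K slots.
    for card in cards:
        if not card["reduced_comparator"] and free_small:
            placement[card["name"]] = free_small.pop(0)
    return placement

cards = [
    {"name": "full-A", "reduced_comparator": False},  # full 8-bit comparator
    {"name": "X-Surf", "reduced_comparator": True},   # lower 3 bits only
]
print(two_pass_config(cards))
```

    Note how the full-comparator card is pushed behind the reduced one even though it appears first in the chain - exactly the reordering that swapping physical cards achieves today.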

  • One of the 'natures of the beast' in your favor is slot order. If your ACAs are always going to be first, there's no issue with getting your preferred slots in the $E9-EF config space, or even the 8M space. The only full-homework assignment is when you are in the general Zorro slots, or somewhere downstream (in general) from the 'first PIC'. I am sure that any Z2 accelerator with AutoConfig 2/4/8MB RAM is going to plan for $0020.0000 RAM placement. The same would apply to any I/O PIC with a software ROM on it. Accelerators on the 24-bit systems can pretty much count on the >16MB address space being their own for that same reason. The only nod to interest above 16MB is on the A1200 with OS 3.1+, with its A3000/A4000-like CPU slot memory detection bonus code. Hindsight being 20/20, it would have been nice to have the OS find all memory on those accelerators, as ROM-added memory is 'late' to the game and leaves OS structures in ChipRAM if there is no AutoConfig-type FastRAM.


    The post-C= era 68060 boards can get away with the $F0.0000 ROMs to fix the FPU, and insert their other I/O interfaces in that 512K 'diagnostic/cartridge' spot for that same 'I'm first' reason.


    I have thought of the 'double-pass' for AutoConfig, too, but that will also need to get some testing, and I see initial problems. I already know that the A2000 TekMagic cards need a Ctrl-A-A kick from a software reset perspective (LoadModule hits this problem). The 3.1.4+ install process avoids attempting a software reset command on 040/060 systems because of this and other cards' similar issues. My understanding is that the reset signal hits from the CPU instruction execution, but the board doesn't react fast enough to hold the CPU from taking off until the board completes its actual hardware reset.

    Former GVP Tech Support 1989-93, GuruROM Maker/Supporter (as personal time allows)

  • I don't worry about the accelerators, but about Buddha, VarIO, X-Surf (all versions). Over decades, it was safe to assume that one of the seven 64k-slots will be free, as no properly-working system has more than 5 slots.

  • Noted on the 5 slots. It's actually 6 slots (5+CPU). There are the 128K PICs out there, too. I'm on the hunt for them. I own 4 of them - the GVP Rev 3 Impact HC (3x), and the Impact SCSI+8. I suspect the painfully old SCSI+RAM 1/0, and possibly the two models of the SCSI+2/x cards, are 128K as well. I'm fairly certain the Impact HC Rev 4 and Rev 5 cards became 64K-sized, and I know the HC2 I/O PIC is 64K. I know the A3001 cards were 64K for their AT interface, and the DPRC was common after that, and is 64K. I have a Buddha (20th) and of course the X-Surf-100, but not the earlier ones.


    One thing that could fix those situations (for active developers) is that if the board is in an unsupported address spot (in the OS Config list), the driver software could be revised to note that the config address of the board is not supported, and that re-arrangement of the expansion slot position is needed.

    Former GVP Tech Support 1989-93, GuruROM Maker/Supporter (as personal time allows)

  • The earlier Buddhas (pre-Phoenix edition) have the full comparator and Shutup. Only with later versions did I start to optimize things out of the design.


    One thing that could fix those situations (for active developers) is that if the board is in an unsupported address spot (in the OS Config list), the driver software could be revised to note that the config address of the board is not supported, and that re-arrangement of the expansion slot position is needed.

    Granted - driver software could complain, but in order for this to be a gain for the user, there would have to be an information path back to expansion lib, telling it to configure it to a different address "next time", otherwise we have the need to re-arrange the physical card order. Wouldn't it be the final goal of this whole endeavour to avoid exactly that?

  • I'm looking for a better solution to a problem that already exists. I am aware that solution may not exist.


    Right now, this problem has a messy ending. It's like the old cartoon where Wile E. Coyote runs off the cliff and, despite the contingencies he prepared for, each fail-safe fails, and the rock (or whatever is following him over) lands on his head or explodes.


    The most swift and sure way to end the problem now is to never 'place' the last PIC which has no place to go, and just exit. That will have to be an expansion.library rework because it's obviously not doing it today. It currently tries to put everything somewhere until it can't find anything more to configure.


    There may be a little more that can be done, but we need more evidence. There has not been a good way to get the evidence (yet) of how each card deals with the end of space, but I've had an idea on that for awhile.


    Jeff Boyer, back in GVP days, showed me that if you stopped Autoconfig (always held your board's CFG_In line high), Autoconfig will never see you, and the HW will never pass the _CFG_Out baton on to other downstream cards to also be seen/placed. The native config sequence passes over all expansion boards, the OS continues on, presumably booting from what is available, and drops you at the shell. Back in the day, that was a floppy. Motherboard resources today give us a few HD boot options.


    From here, you have the ability to bring up the tool Mon, and poke a value into your board offset at $E8.0000, and force it to an address (or even force a shut). You can poke some more at it with Mon for testing, or call an executable, or load a prepared Binddrivers-loaded driver to run against your PIC if you placed it in a healthy place. It's the OS 1.1/1.2 days again, yes, but from a development perspective, the simplicity has its uses.


    Next, Ralph Babel taught me it's possible to fill the OS's board config chain, which is a linked list, with software-generated 'boards' as needed, as long as the chain is not broken and the board record data is within spec. The I/O Extender on the G-Force board gets built this way over the same PIC space as the 64K DPRC. FWIW - AddMem activity takes the shortcut of inserting a known RAM space into the memory list, something several developers do. This is the same concept, just with PICs.
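    The config-chain trick above reads, in outline, like ordinary linked-list insertion. A minimal Python sketch, with `BoardRec` as a hypothetical stand-in for the real ConfigDev structure (not the actual OS layout):

```python
# Sketch: splice a synthetic "board" record into a ConfigDev-style
# linked list, keeping the chain links intact throughout.
class BoardRec:
    def __init__(self, name, base, size):
        self.name, self.base, self.size = name, base, size
        self.next = None

def append_board(head, rec):
    """Walk to the end of the chain and link the new record in."""
    node = head
    while node.next is not None:
        node = node.next
    node.next = rec
    return head

def chain_names(head):
    names, node = [], head
    while node is not None:
        names.append(node.name)
        node = node.next
    return names

head = BoardRec("DPRC", 0xE90000, 0x10000)
# Synthetic record built over the same PIC space, like the G-Force
# I/O Extender example above.
append_board(head, BoardRec("IO-Extender", 0xE90000, 0x10000))
print(chain_names(head))
```

    The real chain carries full ExpansionRom data per record; the point here is only that the chain stays walkable as long as every link is preserved.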


    I see a need to do both sequential (but manual) PIC placement, and a pseudo-fill of bogus cards, for this kind of full-config testing. I tried to get the motherboard cloners a few years ago to add a single jumper on the first CFG_In board slot to their replicas, but their myopia ignored my requests. This would have negated the need for the next option.


    The next step I see is we would need a development expansion.library which never processes Autoconfig boards, only motherboard-detected RAM and disk interfaces. Then, a tool which allows several options. 1) pre-insert known 'hurdles' in various machines (PCMCIA footprints being one), 2) lets you insert fake boards to fill various expansion address slots at will, then 3) allows single physical board placement via defined logic for the expansion region, and 4) allow address overrides of a PIC's destination (if desired). I'd even like to be able to test each expansion region by defining the placement order top-down, bottom-up, and even a hybrid 'split' in the middle placement in each direction (kind of like z3 population became). The tool could do the logic work to test the PIC placement engine logic.
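    The three placement orders proposed in point 3 above (bottom-up, top-down, and a split in the middle) are easy for such a tool to enumerate. A Python sketch with illustrative helper names, using the seven 64K slots as the example region:

```python
# Sketch: candidate visit orders over a region's 64K slots, so a test
# tool could exercise bottom-up, top-down, and a hybrid middle-split
# fill (roughly the way Z3 population came to work).
def slot_orders(base, count, step=0x10000):
    slots = [base + i * step for i in range(count)]
    bottom_up = slots
    top_down = slots[::-1]
    mid = count // 2
    # Hybrid: walk upward from the middle, then downward from just below it.
    split = slots[mid:] + slots[:mid][::-1]
    return {"bottom_up": bottom_up, "top_down": top_down, "split": split}

orders = slot_orders(0xE90000, 7)
for name, seq in orders.items():
    print(name, [hex(a) for a in seq])
```

    Every strategy visits the same slot set; only the order - and therefore which card ends up where when space runs short - differs.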


    Once we have this kind of tool, better decisions on what to do can be made.


    Another possibility in all of this is we have 2 unused 'bits' in the expansion spec card types - At Reg 00, the settings for 00 and 01 are Reserved (p432 HRM 3rd Ed.), with 10 = Z3, and 11=Z2 already defined. It's possible that we could define an additional card type which can identify support for placement into the latest expansion spaces. That would allow currently active (and new) developers with the ability to update their hard or soft logic more avenues to explore.
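    For reference, decoding those two type bits is a one-liner; a small Python sketch of the encodings exactly as stated above (11 = Z2, 10 = Z3, 00/01 reserved):

```python
# Sketch: decode the card-type bits (bits 7-6) of the first AutoConfig
# register, per the encodings discussed above.
def card_type(er_type_byte):
    bits = (er_type_byte >> 6) & 0b11
    return {0b11: "Zorro II",
            0b10: "Zorro III",
            0b01: "reserved",
            0b00: "reserved"}[bits]

print(card_type(0xC0))  # 11xxxxxx -> Zorro II
print(card_type(0x80))  # 10xxxxxx -> Zorro III
print(card_type(0x40))  # 01xxxxxx -> reserved, a candidate for a new type
```

    Defining one of the reserved patterns as a new card type would be visible right at the first config read, before any placement decision is made.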


    One other thing I see a need for is a longer 'delay' before expansion runs against the PIC placement. This would give cards like the ZZ9900 a longer chance to soft-init from flash media. I think he has been chasing a problem where he can't completely software-define (load) his image from flash at power-on before Autoconfig is knocking on his door. His network card function is sometimes unavailable at cold boot. He's fine with warm boots, as he decouples what happens after _Reset triggers. This would help other developers with on-card soft logic loads at boot time.

    Former GVP Tech Support 1989-93, GuruROM Maker/Supporter (as personal time allows)


  • The most swift and sure way to end the problem now is to never 'place' the last PIC which has no place to go, and just exit.

    This would actually add $e8.0000 to the available space, at least for all of my cards. Reason is that the complete memory map of my designs always appears on the whole 64k space, and IO/ROM is also visible - in addition to the config nibbles, which also need to be seen at the target location, otherwise ROMs can't be executed.


    So just "not configuring" will leave it in $e8.0000, and the driver may see it there if an AddConfigDev() is executed anyway. Not exactly a clean solution, as a write to $e8.0048 will break it, but the MMU granularity of the 68030 is small enough to keep software from doing that (68040/060 has larger pages, so it won't work on all my cards).


    From here, you have the ability to bring up the tool Mon,

    Yes, "Megamon" is part of my "survival kit" when developing :-)


    I tried to get the motherboard cloners a few years ago to add a single jumper on the first CFG_In board slot to their replicas, but their myopia ignored my requests.

    To be honest, I would have ignored that as well, as it's a very special application that can be achieved with a one-off modification to a daughter board. I have done a similar mod to an A4000D daughterboard for easier production testing and assigning of the MAC address of the X-Surf-100.


    Further, a modern motherboard shouldn't have jumpers where a soft-config could do the same. Imagine the plethora of jumpers that an ACA500plus would have for all the menu options!

    The next step I see is we would need a development expansion.library which never processes Autoconfig boards,

    That sounds like a patched Kickstart, which is not too difficult at this point (thanks to ROMulus and REMus). A mixed approach to your debugging wishes would be to patch expansion to only configure a single board and then stop. That single board could be a development engine, which has its own ROM that will continue the autoconfig process as you described. The card could be moved into different target/test computers, so the modification would only be in parts that require no soldering.

    Another possibility in all of this is we have 2 unused 'bits' in the expansion spec card types - At Reg 00, the settings for 00 and 01 are Reserved (p432 HRM 3rd Ed.), with 10 = Z3, and 11=Z2 already defined. It's possible that we could define an additional card type which can identify support for placement into the latest expansion spaces. That would allow currently active (and new) developers with the ability to update their hard or soft logic more avenues to explore.

    I'd vote against this, as current and new developers would not like to develop "only for the latest Kickstart" - especially not when more and more developers are adding Kick 1.3 compatibility to their skill set. Yes, an extension to the existing Autoconfig nibbles may be needed, but fiddling with already-defined bits will mean that a new PIC won't work on an older Kickstart. My vote would therefore go to making new definitions to "reserved" bits that provide additional information to a new expansion library, without confusing older versions of existing expansion libraries.


    I see a need for is a longer 'delay' before expansion runs against the PIC placement. This would allow cards like ZZ9900 a longer chance to soft-init from flash media.

    No. There is an easy way to delay an access (OVR and giving your own DTACK), and that allows for infinitely-long delays. The first DTACK from a $e8.0000 read can be delayed until the PIC is fully up and running. Logic for that is minimal (=very cost-effective) and trivial. A solution to this problem is already in the architecture. Providing yet another solution will break compatibility with older systems, which is not what current developers want.

  • This would actually add $e8.0000 to the available space, at least for all of my cards. Reason is that the complete memory map of my designs always appears on the whole 64k space, and IO/ROM is also visible - in addition to the config nibbles, which also need to be seen at the target location, otherwise ROMs can't be executed.


    So just "not configuring" will leave it in $e8.0000, and the driver may see it there if an AddConfigDev() is executed anyway. Not exactly a clean solution, as a write to $e8.0048 will break it, but the MMU granularity of the 68030 is small enough to keep software from doing that (68040/060 has larger pages, so it won't work on all my cards).

    I actually asked Thor about this as a solution back in 2018, and there are 2 issues. 1) Writing the board config address to $E8.0000 will trigger the next card to appear at $E8.0000, so that can never be done. 2) The abandoned effort at $E8.0000 could not actually insert the card into the OS config chain linked list with a valid entry - which is where drivers should always look. The reason it cannot be used there is because only 64K of the card is allowed to be 'visible' when seen in $E8.0000, and any 128K (or larger) PIC cannot show itself to be any larger than 64K at config time. I know this because the PAL packages (2 PALs) on one of the early GVP HC Rev 3 cards, which are sized as 128K PICs, had to be revised to do just that. If the unconfigured PIC at $E8.0000 responds over any of the other already-programmed cards' address space at $E9.0000+, you have a collision situation. Larger shared memory boards also have to behave properly at config time.

    To be honest, I would have ignored that as well, as it's a very special application that can be achieved with a one-off modification to a daughter board. I have done a similar mod to an A4000D daughterboard for easier production testing and assigning of the MAC address of the X-Surf-100.


    Further, a modern motherboard shouldn't have jumpers where a soft-config could do the same. Imagine the plethora of jumpers that an ACA500plus would have for all the menu options!

    This was to be a closed jumper (default) that could be opened by a developer/tester, and pins added/jumper placed over it if they intended to do board development and/or troubleshooting work. Between the prototype run and final run, the effort would have been trivial. Some added other community wish-list items that took them more signal-routing effort.


    All GVP boards have a jumper like this on them, but not all other developers do something like this. The jumper would have to be placed next to the CPU slot and/or on the riser where the signal enters the first slot. The wiring to the pin is default grounded, so the first PIC is always wired 'on', and it's the only way to hold off some CPU slot cards from Auto-configuring.

    I agree, modern-era motherboards would cover this in their chipset/BIOS Setup. I was just setting up a Dell workstation from the XP-Win8 era - I need the parallel port on a DOS-capable host for an even older chip programmer of late-1980s vintage (for Mach130s). It has PCI-slot disable, port-address selection, and similar motherboard resource toggle options.


    That sounds like a patched Kickstart, which is not too difficult at this point (thanks to ROMulus and REMus). A mixed approach to your debugging wishes would be to patch expansion to only configure a single board and then stop. That single board could be a development engine, which has its own ROM that will continue the autoconfig process as you described. The card could be moved into different target/test computers, so the modification would only be in parts that require no soldering.


    I have a slightly larger purpose in mind (if you are able to review the beta bug / enhancement tracker). There are a few other things a reduced-function expansion.library would provide HW/driver developers and even the OS beta team. It would make testing of hardware (mobo and PIC) and drivers a lot easier without bringing up the entire expansion system, or having to have a new ROM built/burned each time. One day we will run out of erasable 27C400 EPROMs. I have yet to reliably program the modern EEPROMs that supposedly mimic them, but there is some laziness on my part to get to the bottom of that, too.


    A dev expansion PIC that blocks/manages the CFG_In/Out signal is an interesting idea. It would certainly help Z2/Z3 expansion slot developers - maybe if it sat in the CPU slot or the first expansion slot. I guess it could hold some extension Flash ROM, and that could provide step control that manually finishes the config process. A software tool that does similar would be a useful companion to that, too.

    My vote would therefore go to making new definitions to "reserved" bits that provide additional information to a new expansion library, without confusing older versions of existing expansion libraries.

    Definitely a better option.


    No. There is an easy way to delay an access (OVR and giving your own DTACK), and that allows for infinitely-long delays. The first DTACK from a $e8.0000 read can be delayed until the PIC is fully up and running. Logic for that is minimal (=very cost-effective) and trivial. A solution to this problem is already in the architecture. Providing yet another solution will break compatibility with older systems, which is not what current developers want.

    That is one answer for new PICs. At the same time, I think there are still power-on race conditions affecting the current batch of boards in the field with the new/faster CPUs. We can't go back to fix what is out there.


    Combined with very fast CPUs, the older/slower LS-class TTL device propagation times are getting pushed. A 7MHz 68K had no issues with the speeds. 50MHz 68030s were still reasonably safe. The fast 68040s and 68060s with large instruction caches and fast memory have created hardware race conditions.


    I've never seen a spec defined in the HRM documentation for when hardware expansion PICs must become ready once their CFG_In is signaled - because it wasn't needed back then. Now the OS has to chill for a respectable period of milliseconds at the start of expansion config, and between each PIC, to allow the elder chip gates to transition.


    In theory: A PIC is programmed by the OS in the CPU slot (it better be ready), drops CFG_Out, the 74LS32 mobo logic across the slot(s) has to transition (worst case - CPU slot out, to each slot-crossing gate, into slot #5) and the PIC has to become ready on the bus as _CFG_In signal triggers it. Board makers may have used LS gate parts and 15-25ns speed PALs at the time. The OS has to wait reasonably for the slowest hardware.


    My influences back in 3.1.4 resulted in some delays added between board read and write processes to the configuration sequence. They helped the symptoms seen a bit (the A2091 oddly was one that improved). I am just not privy to the actual OS code that is doing that work. I think there are a few more issues to stomp. I'm mainly suggesting the OS wait a defined time between expansion.library init and starting the hardware-bang process, and also for between each board. The PIC hardware signal controls you suggest can be compatible with that without detrimental boot delays.

    Former GVP Tech Support 1989-93, GuruROM Maker/Supporter (as personal time allows)

  • I have a slightly larger purpose in mind (if you are able to review the beta bug / enhancement tracker). There are a few other things a reduced-function expansion.library would provide HW/driver developers and even the OS beta team. It would make testing of hardware (mobo and PIC) and drivers a lot easier without bringing up the entire expansion system,

    I don't have access to the Hyperion servers and I don't want to be part of that.


    The "test card" that I have in mind would of course be software-configurable, similar to "software defined autoconfig" that I have implemented on the ACA500plus. This way, you can generate any size PIC and play around with nibble combinations without having to (re)move hardware from the test system at all.


    or having to have a new ROM built/burned each time. One day we will run out of erasable 27C400 EPROMs.

    Use an ACAxxxx accelerator: it lets you load a Kickstart into RAM and overlay it in the original space. When iComp introduced this feature with the ACA1230, it triggered a fruitful wave of Kickstart-related developments that surely contributed to today's broad ability to make new Kickstart images that "just work".

    A dev expansion PIC that blocks/manages the CFG_In/Out signal is an interesting idea. It would certainly help Z2/Z3 expansion slot developers - maybe if it sat in the CPU slot or the first expansion slot.

    Like the ACA2000, which is planned to have full control over the config chain that comes after?


    I think there are still power-on race conditions affecting the current batch of boards in the field with the new/faster CPUs. We can't go back to fix what is out there.

    Accelerators should not jump-start the computer - that's been common knowledge since the 1990s, when harddrives needed lots of time to spin up. If some accelerators don't have the kind of startup slow-down that good designs have had for about 30 years now, then yes, we can't fix everything. What I meant was that a new board like the ZZ9900 should implement such a bus-level delay, as that would make it instantly compatible with any-speed accelerator and any expansion library, no matter what speed and no matter what delay it assumes will be necessary.


    So the idea would be to solve problems where they occur (at the source), not in random other places. If you deviate from that, you'll end up with a Windows-like system that attempts to run on just about any hardware config (reasonable or not), with all the problems, friction losses and bad feelings that this non-role-model comes with. After all, the problem of FPGAs or even SoCs needing more startup time is a pretty modern one, and I believe it can't be any worse than a harddrive spinning up before it answers a "drive identify" command.


    What was bad design back in the day doesn't need any fixing today. There are components/products that don't deserve preservation. I think you're going to make the result a lot better if you make a clear distinction between a dead-end fork of peripheral development and products worth preserving, worth taking into the 21st century.

    A 7MHz 68K had no issues with the speeds. 50MHz 68030's were still reasonably safe. The fast 68040's and 68060's with large instruction caches and fast memory have created hardware race conditions.

    Since the faster accelerators have to access the slow bus anyway, they will be "hit" by the bus-level delay just like their slower cousins. Further, if a short succession of 7MHz accesses is the problem and PICs assume that execution time between accesses is longer, then a simple iCache-disable during Autoconfig will do the trick.

    I've never seen a spec defined in the HRM documentation for when hardware expansion PICs must become ready once their CFG_In is signaled - because it wasn't needed back then.

    Correct - but for a different reason than the one you're mentioning: The hardware was hardwired, or at least programmed in a PAL/GAL/CPLD that has zero startup delay. Pretty much all older cards come up right away and don't care about the speed at which they are accessed, as the length of the access itself is defined by the bus frequency. Access pauses have already been very short on the 68000: if you do a move.l on a Zorro address, the pause between the two word accesses is one cycle. Can't get any shorter than that.


    The reason why you need to chill a bit for combinations of fast accelerators and some more modern cards is that FPGAs and SoCs have considerable startup time compared to the "hardwired" cards of the late 1980s, early 1990s.

    In theory: A PIC is programmed by the OS in the CPU slot (it better be ready), drops CFG_Out, the 74LS32 mobo logic across the slot(s) has to transition (worst case - CPU slot out, to each slot-crossing gate, into slot #5) and the PIC has to become ready on the bus as _CFG_In signal triggers it. Board makers may have used LS gate parts and 15-25ns speed PALs at the time. The OS has to wait reasonably for the slowest hardware.

    So this means LS32-delay times: TI states 10ns typical, 14ns worst-case, so 70ns for propagation delay and maybe 40ns for comparator delay on the local card.


    The total time of 110ns starts at the end of the AS cycle that configures the card. The 7MHz 68000 bus has a cycle time of 140ns. That means that even if you do a read at the earliest-possible time after configuring, you have 30ns slack for the comparator (worst case! typical would be 50ns slack), and the response will come with perfect timing. I doubt that this is the source of the problem.
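    The budget above, written out as a quick check (the 40ns comparator figure is the assumption carried over from the previous paragraph):

```python
# Worked timing budget: five LS32 gate crossings (CPU slot out through
# the slot-crossing gates into slot 5) plus the local card's comparator,
# against one cycle of the 7MHz 68000 bus.
GATE_DELAY_WORST = 14   # ns, worst-case LS32 propagation (TI datasheet)
GATE_DELAY_TYP = 10     # ns, typical
GATES = 5               # slot-crossing gate stages, worst case
COMPARATOR = 40         # ns, assumed comparator delay on the local card
BUS_CYCLE = 140         # ns, one 7MHz 68000 bus cycle

worst_total = GATES * GATE_DELAY_WORST + COMPARATOR            # 110 ns
slack_worst = BUS_CYCLE - worst_total                          # 30 ns
slack_typ = BUS_CYCLE - (GATES * GATE_DELAY_TYP + COMPARATOR)  # 50 ns
print(worst_total, slack_worst, slack_typ)
```

    Even the worst-case chain settles with a cycle's worth of margin to spare, which supports the conclusion that gate propagation is not the source of the problem.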

    My influences back in 3.1.4 resulted in some delays added between board read and write processes to the configuration sequence. They helped the symptoms seen a bit (the A2091 oddly was one that improved).

    Interesting. The A2091 makes two configure-runs, and that's all within the custom DMAC. Maybe that requires an additional cycle? You'll easily gain that if you run the autoconfig process without caches.

  • I'd suggest introducing a "reduced comparator" bit: Many of my designs do not implement the full 8-bit comparator, but only the lower three bits, expecting the system to have a free space in $e9.xxxx to $ef.xxxx. For full compatibility, autoconfig would have to go through all cards first, find the "reduced comparator" cards and make sure to give them one of those seven precious places. Moving to such a two-pass autoconfig process would make it truly "plug and play", as opposed to the current situation where swapping cards around will make certain configs work that otherwise wouldn't.

    Two-pass AutoConfig can't work without additional reboot (by which I mean to store gathered parameters somewhere in Chip-/RangerMem between the passes) as Expansion's progression through the individual cards happens by writing the config byte of each card. Otherwise, the cards stick to 0xE80000 and FF00..., respectively and won't pull /CFGOUT to enable the next card. Frankly, I'm not too enthusiastic about two-pass config.


    I'm pretty positive that you know what happens next when the seven slots in 0xE80000 space are exhausted. Especially since you can equip every Amiga with 030(+) CPU boards these days, those 8 MB of Expansion space at 0x200000 offer plenty of room for additional I/O cards. This allocation policy has remained in effect since the C= days and is highly likely to stay (IMHO) given the intricacies of existing hardware.


    An expansion.library build for testing purposes that skips Z2/Z3 is certainly possible. Actually, I generated one for my own tests a while back (that resided in the 0xF0 Flash of an 060 board).

  • Two-pass AutoConfig can't work without additional reboot

    I'm well aware of that. However, it's not a "full reboot", but just executing the "reset" instruction, which will only pull the reset line low for a few cycles, then execute the next command after that. We're using that very technique in the ACA500plus menu system, as it aims to be compatible with many different accelerators that do their own (set of) autoconfig passes - not always visible to the host system, which is why there is a programmable counter that needs to run out before the software-defined Autoconfig kicks in.


    I understand from what thebajaguy is writing that some 040/060 cards do not play nice with the reset command - can't help that, of course. However, the main takeaway from this discussion for me is that I should add a feature like "delay a few cycles after $e8.0048 write" to the ACA2000 nice-to-have-list in order to minimize possible problems with fast A1200 accelerators that will be connected to the ACA2000.

  • I'm pretty positive that you know what happens next when the seven slots in 0xE80000 space are exhausted. Especially since you can equip every Amiga with 030(+) CPU boards these days, those 8 MB of Expansion space at 0x200000 offer plenty of room for additional I/O cards. This allocation policy has remained in effect since the C= days and is highly likely to stay (IMHO) given the intricacies of existing hardware.

    I have several boards that behave erratically when they end up in the 8M space. The Ameristar Ethernet AE2000-100 hangs the AutoConfig process with no driver interaction. The A2090A hangs Autoconfig with its driver ROMs active. The GVP Impact HC2 tosses driver requesters for that errant board when the driver tries to communicate outside the normal I/O space. I assume the HC2 HW doesn't decode something properly with its 8M programmed address. So far the other Impact cards I have don't have an issue, nor does anything with a DPRC, but I don't have every variation. I still have more 3rd-party boards to put through the 'do they operate with their software' type test once they land as overflow in 8M. I expect more to make the list once the driver software is talking to them. I still think this 8M-space landing point is a messy solution from way back. Halting (with no config chain entry) when it is full is, I think, still better if Shut is not an option and we can't find a better solution.

    One thing I want to try (and it would require that special expansion.library / tool to test) is to test the behavior of boards placed in the A0.0000-B7.FFFF space. I know there are possible pitfalls there, but collecting the info could be useful. I also want to see if it's possible to 'graveyard' the smaller I/O cards somewhere in that space (or maybe elsewhere) as a possible way to shut them up when they technically don't support Shut. All of my random tests had them landing in the low $20.0000 range, or just above a 4MB RAM PIC at $60.0000, but they might decode enough to reach that area.


    I understand from what thebajaguy is writing that some 040/060 cards do not play nice with the reset command - can't help that, of course. However, the main takeaway from this discussion for me is that I should add a feature like "delay a few cycles after $e8.0048 write" to the ACA2000 nice-to-have list in order to minimize possible problems with fast A1200 accelerators that will be connected to the ACA2000.

    This would probably be a safe thing to implement (delay cycles in some compatible form). I believe the solution the OS currently uses in some places is to hit select safe Chip register locations with useless read or write activity, gaining a time delay that slows access to the hardware in question.


    Also, your earlier comment on my reply about turning off (or inhibiting) the instruction cache is a good idea. I think it was mentioned back in the 3.1.4 testing days that the Inst Cache is enabled by the OS fairly early in the Exec startup code for all CPUs. My argument to leave it off until after Expansion is done with its initialization was not strong enough at the time.


    Also, on the comment about slow HDs, I spent my entire time at GVP in the SCSI disk trenches. That pain is still there 30+ years later when I use 'other' controllers, or whatever was used prior to gvpscsi.device v3.12. We had those exact 'slow drive' issues in the earlier 1.0, 2.x, and 3.7 Impact drivers with the bus-scan timeout length. Between Ralph and me, the concept of booting from the first available disk with a bootable RDB came about. The same goes for the interface tool for bus rescan of the pokey devices, and the other bus-communication adjustment options that are in the later 4.x and GuruROM. There are viable solutions to things if a few minds put in the effort.


    I do have an ACA1221lc that I plan to make use of on an ACA2000 when it comes out. I see the potential uses for the ACA1221lc in the A2000 for the reasons you note. I have an A1200, but the JAWS-II 68030/882/50 won out there for its reasonably good performance. I also want to keep that system mostly static.

    Former GVP Tech Support 1989-93, GuruROM Maker/Supporter (as personal time allows)

  • I have several boards that behave erratically when they end up in the 8M space.

    Many IO cards will do, especially if you have an accelerator. Trouble is that the "usual" scenario is that there's memory or a GFX card's memory block located in the 8M Z2 space. This implies that it's OK to have caches switched on for this area - and that's what most, if not all accelerators do. On the other hand, the $e8-$ef space is usually exempt from being cached, so IO cards will work reliably. I can only think of a fix with the MMU, as dividing up 8M of space in 64k segments for possible cache-disable means 128 cache enable/disable bits. That's nothing you'd implement in a CPLD.
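
    The arithmetic behind the "128 cache enable/disable bits" remark above can be sketched like this (my own back-of-envelope calculation, nothing from the card logic itself):

```python
# Dividing the 8M Zorro II space ($20.0000-$9F.FFFF) into 64K segments,
# each segment needing its own cache-inhibit flag:
Z2_SPACE = 8 * 1024 * 1024   # 8 MB expansion space
SEGMENT  = 64 * 1024         # 64 KB, the smallest Zorro II board size
flags = Z2_SPACE // SEGMENT
print(flags)                 # 128 flags - too much state for a small CPLD
```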


    One thing I want to try (and it would require that special expansion.library / tool to test) is to test the behavior of boards placed in the A0.0000-B7.FFFF space. I know there are possible pitfalls there, but collecting the info could be useful.

    The pitfall here is that CIA chips are mirrored in that space. Note that the original address decoding from the A1000 only divides into 2M spaces, and the whole space from $a0.0000 to $bf.ffff is decoded to be E-Clock synchronous. To my knowledge, this same memory map has been implemented when Commodore made the transition to Gary-based computers (A500 and A2000), so this is unlikely to work.


    What I have done in ACA500plus and ACA1221lc is only possible because I have full control over the memory map by simply keeping the access within my card: I can put memory and other cards there, because I can choose to hide the access completely from the host system. However, you *must* tell the host system if you want to access a Zorro card there, and with the E-Clock decoding, you can pull /Slave all you want - it just won't work.

    I also want to see if it's possible to 'graveyard' the smaller I/O cards somewhere in that space

    One reason to not support Shutup_Forever is to save logic space. If you go with the original Commodore example, it's only a single additional RS flipflop in a PAL, but if you do it all in a CPLD, you save quite a bit if you reduce to 4 data pins only (taking advantage of the two-stage write to $e8.0048), store only three bits of the address, and reduce the comparator to 3 bits. You can even skip address lines to reduce pin count. So be careful attempting to put a card at $a0.0000 to "bury" it: it might believe you have configured it to $e8.0000. For stunts like this, you'll need a comprehensive list of autoconfig IDs and the capabilities of the cards. How big do you want to make expansion.library?
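
    The aliasing trap described above can be modeled in a few lines (an illustrative sketch of mine, assuming the reduced comparator keeps only the three slot-select bits A18-A16 that distinguish the seven $E9-$EF slots):

```python
# A card that stores and compares only bits 16-18 of its configured base
# cannot tell $A0.0000 apart from $E8.0000 - both have 000 in those bits.
def reduced_match(configured_base, access_addr):
    """3-bit comparator: compare only bits 16-18 of the address."""
    return ((configured_base >> 16) & 0x7) == ((access_addr >> 16) & 0x7)

print(reduced_match(0xA00000, 0xE80000))  # True: "buried" card still answers at $E8.0000
print(reduced_match(0xA00000, 0xE90000))  # False: a different slot's bits
```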

    My argument to leave it off until after Expansion is done with its initialization was not strong enough at the time.

    Glad to help - Cache has been an easy way to slow down the startup process since the first Apollo 1230 accelerator. That one uses the config_done signal, which is in the same MACH chip, to disable all caches (data and instruction) until the config write has been done. That was enough to let most physical hard drives spin up. Now that I have shared this idea, I expect it to pop up in open-source "developments" without giving credit, as usual...


    Jens

  • Many IO cards will do, especially if you have an accelerator. Trouble is that the "usual" scenario is that there's memory or a GFX card's memory block located in the 8M Z2 space. This implies that it's OK to have caches switched on for this area - and that's what most, if not all accelerators do. On the other hand, the $e8-$ef space is usually exempt from being cached, so IO cards will work reliably. I can only think of a fix with the MMU, as dividing up 8M of space in 64k segments for possible cache-disable means 128 cache enable/disable bits. That's nothing you'd implement in a CPLD.

    I'm familiar with the data cache issue and I/O behavior. The problem is, these boards are hanging config (or their driver) before the data cache comes into play (i.e. - prior to SetPatch/library load).


    MuLibs has a 'fix' for MMU-capable CPUs in its configuration file, if it were in play. If it were the cache, you are correct: covering the 8M space in the MMU table is messy, but still possible. The problem is, some boards don't seem to like to go down that far, address-wise.


    Actually, this is getting out in the weeds with an edge case, but because you have to land larger boards on certain address boundaries, mapping nocache at 256K granularity would probably work, as you could stack multiple small 64K/128K I/O cards before a hypothetical 512K 'RAM' card comes along.
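
    A quick back-of-envelope for the 256K idea above (my own arithmetic, assuming the size-aligned placement rule holds for every board):

```python
# With 256K nocache windows instead of 64K ones, the 8M Zorro II space
# needs far fewer MMU entries, and since boards land on boundaries
# aligned to their own size, several 64K/128K I/O cards can share one
# window before a larger, coarser-aligned RAM board begins.
Z2_SPACE = 8 * 1024 * 1024
WINDOW   = 256 * 1024
print(Z2_SPACE // WINDOW)    # 32 windows instead of 128
```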


    I'd still like to test the graveyard theory while we have a while to assess the situation. A manual config tool would make it easier to 'place' a set of boards after boot and to test different config algorithms. No, I'd not suggest using expansion.library to hold a list of offender IDs to handle differently; that's wasted ROM space. The thing is, we are blind to what exists in the field. Nobody had sharp enough tools back then to fully test what was designed. Things are a little different today.


    One thing that would help in this is an AutoConfig test board with some logic on it that just has 3-4 I/O boards in it to arbitrarily fill the possible expansion spaces up. A jumper block to pick the various sizes and places. Maybe later this winter or spring I'll crank up KiCAD and try one, run a batch of 10 prototypes with a few buffers and a CPLD on it. Can't hurt to learn by making some mistakes.


    In the end, there has to be a 'solution' decided upon, be it 'stop dead' (ending AutoConfig), trying to land one board in a graveyard and stopping, or maybe we can get away with something in a reasonable number of cases. The AutoConfig standard never technically defined an 'end' to the logic, but it may be time to decide and document one. We can also take the argument (you offered it earlier) that some boards may be orphaned by the chosen solution and should be avoided in the worst-case config scenarios. That can go in a ReadMe that few will read until someone utters 'RTFM'. The old ROM with the old solution can still be used, too, if they need it.


    I'm still planning to work on AutoConfig behavior if the devs are willing to build some software test variants. The Inst cache disable is one of the things I will try to bring up again. I think the manual-config tool would also be a useful piece to include in the HW development kit: halt the config process electrically, then let the tool control things in single-step after the CLI has started. It might even be possible to start ROMs (and watch them with development tools) to catch missed nasty behavior.


    Now that I have shared this idea, I expect it to pup up in open-source "developments" without giving credit, as usual...


    From my first days working Tech Support at GVP, I learned many rules of the world. Among them: 1) The beatings will continue until morale improves, and 2) No good deed goes unpunished. 30 years removed from the support trenches, after watching many devs of the 1980s and 90s (OEM/3rd parties) learn the hard way, I passed some good advice along to the 'new' crew of devs a few years back. I got completely flamed for suggesting they avoid stepping in the mistakes of those before them. After a few rounds of cutting their teeth on customer angst over early designs, lo and behold, their close cadre eventually convinced them to do it the better way.


    Rob

    Former GVP Tech Support 1989-93, GuruROM Maker/Supporter (as personal time allows)
