Posts by thebajaguy



    Many IO cards will do, especially if you have an accelerator. Trouble is that the "usual" scenario is that there's memory or a GFX card's memory block located in the 8M Z2 space. This implies that it's OK to have caches switched on for this area - and that's what most, if not all accelerators do. On the other hand, the $e8-$ef space is usually exempt from being cached, so IO cards will work reliably. I can only think of a fix with the MMU, as dividing up 8M of space in 64k segments for possible cache-disable means 128 cache enable/disable bits. That's nothing you'd implement in a CPLD.
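    The arithmetic behind that bit count, for reference (a trivial sketch; the 8M window and 64K granule come from the post above):

```python
# Bookkeeping cost of per-segment cache control for the 8M Zorro II space:
# one enable/disable bit per 64K segment, as described in the post above.

Z2_SPACE = 8 * 1024 * 1024   # $20.0000-$9F.FFFF, the 8M Z2 expansion window
SEGMENT  = 64 * 1024         # one cache-control granule

bits_needed = Z2_SPACE // SEGMENT
print(bits_needed)   # 128 bits of state - impractical to hold in a small CPLD
```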

    I'm familiar with the data cache issue and I/O behavior. The problem is, these boards are hanging config (or their driver) before the data cache comes into play (i.e. - prior to SetPatch/library load).


    The MuLibs configuration file has a 'fix' for MMU-capable CPUs, if it were in play. If it were the cache, you are correct: it's messy in terms of the MMU table to cover the 8M space, but still possible. The problem is, some boards don't seem to like to go down that far, address-wise.


    Admittedly this is getting out into the weeds of an edge case, but because larger boards have to land on certain address boundaries, mapping nocache at 256K granularity would probably work: you could stack multiple small 64K/128K I/O cards before a hypothetical 512K 'RAM' card comes along.
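    A quick sketch of that stacking argument, under two assumptions from the discussion: Zorro II boards land on boundaries of their own size, and nocache is applied per 256K granule (the card list is illustrative, not a real configuration):

```python
# Toy model: size-aligned small I/O PICs pack under whole 256K nocache
# granules; a size-aligned card smaller than the granule can never straddle
# a granule boundary.

GRANULE = 256 * 1024
base = 0x200000
regions = {}          # granule start -> set of card kinds inside it
for size, kind in [(64*1024, "io"), (64*1024, "io"), (128*1024, "io"),
                   (512*1024, "ram")]:
    # align up to the card's own size (Zorro II placement rule)
    base = (base + size - 1) // size * size
    # walk the card's span and record which granules it touches
    for a in range(base, base + size, GRANULE if size >= GRANULE else size):
        regions.setdefault(a // GRANULE * GRANULE, set()).add(kind)
    base += size

# granules holding only I/O cards are the nocache candidates
nocache = [hex(r) for r, kinds in sorted(regions.items()) if kinds == {"io"}]
print(nocache)   # all three small I/O cards share one 256K nocache granule
```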


    I'd still like to test the graveyard theory while we have a while to assess the situation. A manual config tool would make it easier to 'place' a set of boards after boot, and to test different config algorithms. No, I'd not suggest using expansion.library to hold a list of offender IDs to handle differently. That's wasted ROM space. The thing is, we are blind to what exists in the field. Nobody had sharp enough tools back then to fully test what was designed. Things are a little different today.


    One thing that would help here is an AutoConfig test board with some logic on it that presents 3-4 I/O boards, just to arbitrarily fill the possible expansion spaces. A jumper block would pick the various sizes and placements. Maybe later this winter or spring I'll crank up KiCad and try one, and run a batch of 10 prototypes with a few buffers and a CPLD on it. Can't hurt to learn by making some mistakes.


    In the end, there has to be a 'solution' decided upon, be it 'stop dead' (ending AutoConfig), trying to land a board in a graveyard and stopping, or maybe we can get away with something in a reasonable number of cases. The AutoConfig standard never technically defined an 'end' to the logic, but it may be time to decide on one and document it. We can also take the argument (you offered it earlier) that some boards may be orphaned by the solution chosen, and that they should be avoided in the worst-case config scenarios. That can take up space in a ReadMe that few will read until someone utters 'RTFM'. The old ROM with the old solution can still be used, too, if they need it.


    I'm still planning to work on AutoConfig behavior if the devs are willing to build some software test variants. The Inst cache-disable is one of them I will try to bring up again. I think the manual-config tool would also be a useful piece to include in the HW development kit. Halt the config process electrically, and then let the tool control things in single-step after the CLI has started. It might even be possible to start ROMs (and watch them with development tools) to catch missed nasty behavior.


    Now that I have shared this idea, I expect it to pop up in open-source "developments" without giving credit, as usual...


    From the first days of working Tech Support at GVP, I learned many rules of the world. Among them: 1) The beatings will continue until morale improves, and 2) No good deed goes unpunished. 30 years removed from the support trenches, after watching many devs of the 1980s and 90s (OEM/3rd parties) learn the hard way, I passed some good advice along to the 'new' crew of devs a few years back. I got completely flamed for suggesting they avoid stepping in the mistakes of those before them. After a few rounds of cutting their teeth on customer angst over early designs, lo and behold, their close cadre eventually convinced them to do it that better way.


    Rob

    I'm pretty positive that you know what happens next when the seven slots in the 0xE80000 space are exhausted. Especially since you can equip every Amiga with 030(+) CPU boards these days, the 8 MB expansion space at 0x200000 offers plenty of room for additional I/O cards. This allocation policy has been in effect since the C= days and is highly likely to stay (IMHO), given the intricacies of existing hardware.

    I have several boards that behave erratically when they end up in the 8M space. The Ameristar Ethernet AE2000-100 hangs the AutoConfig process with no driver interaction. The A2090A hangs AutoConfig with its driver ROMs active. The GVP Impact HC2 tosses driver requesters for the errant board when the driver tries to communicate outside the normal I/O space; I assume the HC2 hardware doesn't decode something properly with its 8M programmed address. So far the other Impact cards I have don't have an issue, nor does anything with a DPRC, but I don't have every variation. I still have more 3rd-party boards to put through the 'do they operate with their software' type of test once they land as overflow in 8M, and I expect more to make the list once the driver software is talking to them. I still think this 8M-space landing point is a messy solution from way back. Halting (with no config chain entry) when it is full is, I think, still better if Shut is not an option and we can't find a better solution.

    One thing I want to try (and it would require that special expansion.library / tool) is testing the behavior of boards placed in the $A0.0000-$B7.FFFF space. I know there are possible pitfalls there, but collecting the info could be useful. I also want to see if it's possible to 'graveyard' the smaller I/O cards somewhere in that space (or maybe elsewhere) as a way to shut them up when they technically don't support Shut. All of my random tests had them landing in the low $20.0000 range, or just above a 4MB RAM PIC at $60.0000, but they might decode enough to reach that area.


    I understand from what thebajaguy is writing that some 040/060 cards do not play nice with the reset command - can't help that, of course. However, the main takeaway from this discussion for me is that I should add a feature like "delay a few cycles after $e8.0048 write" to the ACA2000 nice-to-have-list in order to minimize possible problems with fast A1200 accelerators that will be connected to the ACA2000.

    This would probably be a safe thing to implement (delay cycles in some compatible form). I think the solution the OS currently uses in some places is to hit select safe places in the chip registers with useless read or write activity, gaining a time delay that slows access to that hardware.


    Also, your earlier comment on my reply about turning off (or inhibiting) the instruction cache is a good idea. I think it was mentioned back in the 3.1.4 testing days that the Inst cache is enabled by the OS fairly early in the Exec startup code for all CPUs. My argument to leave it off until after Expansion is done with its initialization was not strong enough at the time.


    Also, on the comment about slow HDs: I spent my entire time at GVP in the SCSI disk trenches. That pain is still there 30+ years later when I use 'other' controllers, or what was used prior to gvpscsi.device v3.12. We had those exact 'slow drive' issues in the earlier 1.0, 2.x, and 3.7 Impact drivers with the bus-scan timeout length. Between Ralph and me, the concept of booting from the first available disk with a bootable RDB came about. The same goes for the interface tool for bus rescan of the pokey devices, and the other bus-communication adjustment options that are in the later 4.x and GuruROM. There are viable solutions to things if a few minds put the effort in.


    I do have an ACA 1221lc that I plan to make use of on an ACA2000 when it comes out. I see the potential uses for the ACA1221lc in the A2000 for the reasons you note. I have an A1200, but the JAWS-II 68030/882/50 won out there for its reasonably good performance. I also want to keep that system mostly static.

    This would actually add $e8.0000 to the available space, at least for all of my cards. Reason is that the complete memory map of my designs always appears on the whole 64k space, and IO/ROM is also visible - in addition to the config nibbles, which also need to be seen at the target location, otherwise ROMs can't be executed.


    So just "not configuring" will leave it in $e8.0000, and the driver may see it there if an AddConfigDev() is executed anyway. Not exactly a clean solution, as a write to $e8.0048 will break it, but the MMU granularity of the 68030 is small enough to keep software from doing that (68040/060 has larger pages, so it won't work on all my cards).

    I actually asked Thor about this as a solution back in 2018, and there are 2 issues. 1) Writing the board config address to $E8.0000 will trigger the next card to appear at $E8.0000, so that can never be done. 2) The abandoned card at $E8.0000 could not actually be inserted into the OS config chain linked list with a valid entry - which is where drivers should always look. The reason it cannot be used there is that only 64K of the card is allowed to be 'visible' while seen at $E8.0000, and any 128K (or larger) PIC cannot show itself to be any larger than 64K at config time. I know this because the PAL package (2 PALs) on the early GVP HC Rev 3 cards, which are sized as 128K PICs, had to be revised to do just that. If the unconfigured PIC at $E8.0000 responds over any of the already-programmed cards' address space at $E9.0000+, you have a collision situation. Larger shared-memory boards also have to behave properly at config time.

    To be honest, I would have ignored that as well, as it's a very special application that can be achieved with a one-off modification to a daughter board. I have done a similar mod to an A4000D daughterboard for easier production testing and assigning of the MAC address of the X-Surf-100.


    Further, a modern motherboard shouldn't have jumpers where a soft-config could do the same. Imagine the plethora of jumpers that an ACA500plus would have for all the menu options!

    This was to be a closed jumper (default) that could be opened by a developer/tester, and pins added/jumper placed over it if they intended to do board development and/or troubleshooting work. Between the prototype run and final run, the effort would have been trivial. Some added other community wish-list items that took them more signal-routing effort.


    All GVP boards have a jumper like this on them, but not all other developers do something like this. The jumper would have to be placed next to the CPU slot and/or on the riser where the signal enters the first slot. The wiring to the pin is default grounded, so the first PIC is always wired 'on', and it's the only way to hold off some CPU slot cards from Auto-configuring.

    I agree, modern-era motherboards would cover this in the chipset/BIOS setup. I was just setting up a Dell workstation of the XP-Win8 era - I need the parallel port on a DOS-capable host for an even older chip programmer of late-1980s vintage (for Mach130s). It has PCI-slot disable, port-address selection, and similar motherboard resource toggle options.


    That sounds like a patched Kickstart, which is not too difficult at this point (thanks to ROMulus and REMus). A mixed approach to your debugging wishes would be to patch expansion to only configure a single board and then stop. That single board could be a development engine, which has its own ROM that will continue the autoconfig process as you described. The card could be moved into different target/test computers, so the modification would only be in parts that require no soldering.


    I have a slightly larger purpose in mind (if you are able to review the beta bug / enhancement tracker). There are a few other things a reduced-function expansion.library would provide HW/driver developers and even the OS beta team. It would make testing of hardware (mobo and PIC) and drivers a lot easier without bringing up the entire expansion system, or having to have a new ROM built/burned each time. One day we will run out of erasable 27C400 EPROMs. I have yet to reliably program the modern EEPROMs that supposedly mimic them, but there is some laziness on my part in getting to the bottom of that, too.


    A dev expansion PIC that blocks/manages the CFG_In/Out signal is an interesting idea. It would certainly help Z2/Z3 expansion slot developers - maybe if it sat in the CPU slot or the first expansion slot. I guess it could hold some extension Flash ROM, and that could provide step control that manually finishes the config process. A software tool that does similar would be a useful companion to that, too.

    My vote would therefore go to making new definitions to "reserved" bits that provide additional information to a new expansion library, without confusing older versions of existing expansion libraries.

    Definitely a better option.


    No. There is an easy way to delay an access (OVR and giving your own DTACK), and that allows for infinitely-long delays. The first DTACK from a $e8.0000 read can be delayed until the PIC is fully up and running. Logic for that is minimal (=very cost-effective) and trivial. A solution to this problem is already in the architecture. Providing yet another solution will break compatibility with older systems, which is not what current developers want.

    That is one answer for new PICs. At the same time, I think there are still power-on race conditions affecting the current batch of boards in the field with the new/faster CPUs. We can't go back to fix what is out there.


    Combined with very fast CPUs, the older/slower LS-class TTL device propagation times are getting pushed. A 7MHz 68K had no issues with the speeds. 50MHz 68030's were still reasonably safe. The fast 68040's and 68060's with large instruction caches and fast memory have created hardware race conditions.


    I've never seen a developer spec defined in the HRM documentation for when expansion PICs must become ready once their CFG_In is signaled - because it wasn't needed back then. Now the OS has to chill for a respectable period of milliseconds at the start of expansion config, and between each PIC, to allow the elder chip gates to transition.


    In theory: A PIC is programmed by the OS in the CPU slot (it better be ready), drops CFG_Out, the 74LS32 mobo logic across the slot(s) has to transition (worst case - CPU slot out, to each slot-crossing gate, into slot #5) and the PIC has to become ready on the bus as _CFG_In signal triggers it. Board makers may have used LS gate parts and 15-25ns speed PALs at the time. The OS has to wait reasonably for the slowest hardware.
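    A back-of-envelope tally of that worst case, using assumed datasheet-class numbers rather than measurements (the gate and PAL delays below are my illustrative picks, not values from the post):

```python
# Worst-case CFG_Out -> CFG_In ripple from the CPU slot into slot #5,
# with one 74LS32-class gate crossed per slot plus the target board's PAL.

LS32_TPD_NS = 22   # assumed worst-case 74LS32 propagation delay per gate
PAL_TPD_NS  = 25   # assumed 25ns-class PAL on the target board

slots_crossed = 5                      # CPU slot out, gate per slot crossing
ripple_ns = slots_crossed * LS32_TPD_NS + PAL_TPD_NS
print(ripple_ns, "ns before the last PIC can even see its _CFG_In")
```

    Tiny on its own, but it illustrates why a fast 040/060 banging registers back-to-back can outrun the elder logic, while a 7MHz 68K never could.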


    My influences back in 3.1.4 resulted in some delays added between board read and write processes to the configuration sequence. They helped the symptoms seen a bit (the A2091 oddly was one that improved). I am just not privy to the actual OS code that is doing that work. I think there are a few more issues to stomp. I'm mainly suggesting the OS wait a defined time between expansion.library init and starting the hardware-bang process, and also for between each board. The PIC hardware signal controls you suggest can be compatible with that without detrimental boot delays.

    I'm looking for a better solution to a problem that already exists. I am aware that solution may not exist.


    Right now, this problem has an ending which is messy. It's like the old cartoon where Wile E Coyote runs off the cliff, and despite the contingencies he prepared for, each fail-safe fails, and the rock or whatever is following him over lands on his head/explodes.


    The most swift and sure way to end the problem now is to never 'place' the last PIC which has no place to go, and just exit. That will have to be an expansion.library rework because it's obviously not doing it today. It currently tries to put everything somewhere until it can't find anything more to configure.


    There may be a little more that can be done, but we need more evidence. There has not been a good way to get the evidence (yet) of how each card deals with the end of space, but I've had an idea on that for awhile.


    Jeff Boyer, back in GVP days, showed me that if you stopped AutoConfig (always held your board's CFG_In line high), AutoConfig will never see you, and the HW will never pass the _CFG_Out baton on to other downstream cards to be seen/placed either. The native config sequence passes over all the expansion boards, the OS continues on, presumably booting from what is available, and drops you at the shell. Back in the day, that was a floppy. Motherboard resources today give us a few HD boot options.


    From here, you have the ability to bring up the tool Mon, and poke a value into your board offset at $E8.0000, and force it to an address (or even force a shut). You can poke some more at it with Mon for testing, or call an executable, or load a prepared Binddrivers-loaded driver to run against your PIC if you placed it in a healthy place. It's the OS 1.1/1.2 days again, yes, but from a development perspective, the simplicity has its uses.


    Next, Ralph Babel taught me that it's possible to fill the OS's board config chain, which is a linked list, with software-generated 'boards' as needed, as long as the chain is not broken and the board record data is within spec. The I/O Extender on the G-Force board gets built this way, over the same PIC space as the 64K DPRC. FWIW - AddMem activity takes the shortcut of inserting a known RAM space into the memory list, something several developers do. This is the same concept, just with PICs.
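    The concept can be shown with a toy linked list (this is not the real ConfigDev structure or the AddConfigDev() call, just the shape of the idea):

```python
# Toy model: the board config chain as a linked list that software-generated
# 'boards' can be spliced into, as long as the links stay intact.

class FakeConfigDev:
    def __init__(self, name, base, size):
        self.name, self.base, self.size = name, base, size
        self.next = None

def append(head, dev):
    node = head
    while node.next:
        node = node.next
    node.next = dev          # splice in without breaking the chain
    return head

# A real 64K DPRC PIC, plus a pseudo-board layered over the same space,
# mimicking how the G-Force I/O Extender is described as being registered.
head = FakeConfigDev("DPRC", 0xE90000, 0x10000)
append(head, FakeConfigDev("IO-Extender (pseudo)", 0xE90000, 0x10000))

names = []
node = head
while node:
    names.append(node.name)
    node = node.next
print(names)
```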


    I see a need to do both sequential (but manual) PIC placement, and a pseudo-fill of bogus cards, for this kind of full-config testing. I tried to get the motherboard cloners a few years ago to add a single jumper on the first CFG_In board slot to their replicas, but their myopia ignored my requests. This would have negated the need for the next option.


    The next step I see is we would need a development expansion.library which never processes Autoconfig boards, only motherboard-detected RAM and disk interfaces. Then, a tool which allows several options. 1) pre-insert known 'hurdles' in various machines (PCMCIA footprints being one), 2) lets you insert fake boards to fill various expansion address slots at will, then 3) allows single physical board placement via defined logic for the expansion region, and 4) allow address overrides of a PIC's destination (if desired). I'd even like to be able to test each expansion region by defining the placement order top-down, bottom-up, and even a hybrid 'split' in the middle placement in each direction (kind of like z3 population became). The tool could do the logic work to test the PIC placement engine logic.
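    Steps 1-3 of such a tool might look like this toy first-fit model (the `place` helper and the 'hurdle' addresses are hypothetical, not expansion.library logic):

```python
# Hypothetical single-step placement tool: pre-fill the Z2 I/O region
# ($E9-$EF) with fake 64K 'hurdle' boards, then ask where one real PIC lands.

IO_LO, IO_HI = 0xE90000, 0xF00000
SLOT = 0x10000                     # 64K I/O granules

def place(occupied, size):
    """First-fit a size-aligned PIC into the I/O region, or None."""
    a = (IO_LO + size - 1) // size * size    # boards land on own-size boundary
    while a + size <= IO_HI:
        if all((a + off) not in occupied for off in range(0, size, SLOT)):
            return a
        a += size
    return None

occupied = set()
# steps 1+2: pre-insert fake 64K hurdle boards at chosen spots
for fake in (0xE90000, 0xEB0000, 0xEF0000):
    occupied.add(fake)

# step 3: single-step place one physical 128K PIC
addr = place(occupied, 0x20000)
print(hex(addr) if addr is not None else "no fit -> Shut or graveyard")
```

    The same skeleton would let the tool swap in top-down, bottom-up, or split placement policies and compare the results.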


    Once we have this kind of tool, better decisions on what to do can be made.


    Another possibility in all of this: we have two unused values in the expansion spec's card-type field. At Reg 00, the settings 00 and 01 are Reserved (p432, HRM 3rd Ed.), with 10 = Z3 and 11 = Z2 already defined. We could define an additional card type which identifies support for placement into the latest expansion spaces. That would give currently active (and new) developers more avenues to explore when updating their hard or soft logic.
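    A minimal decode of that Reg $00 type field, showing the reserved values a new definition could claim (sketch only; the function name is mine):

```python
# Top two bits of AutoConfig register $00 (er_Type), per HRM 3rd Ed. p432:
# 11 = Zorro II, 10 = Zorro III, 00/01 = reserved (candidates for new meaning).

TYPES = {0b00: "reserved", 0b01: "reserved",
         0b10: "Zorro III", 0b11: "Zorro II"}

def board_type(er_type_byte):
    return TYPES[(er_type_byte >> 6) & 0b11]

print(board_type(0xC0))   # a Zorro II card
print(board_type(0x80))   # a Zorro III card
print(board_type(0x40))   # reserved - room for a new card-type definition
```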


    One other thing I see a need for is a longer 'delay' before expansion runs the PIC placement. This would give cards like the ZZ9900 a longer chance to soft-init from flash media. I think its developer has been chasing a problem where he can't completely software-define (load) his image from flash at power-on before AutoConfig is knocking on his door; his network card function is sometimes unavailable at cold boot. He's fine with warm boots, as he decouples what happens after _Reset triggers. This would help other developers with on-card soft-logic loads at boot time.

    Noted on the 5 slots. It's actually 6 slots (5 + CPU). There are the 128K PICs out there, too, and I'm on the hunt for them. I own 4 of them - the GVP Rev 3 Impact HC (3x), and the Impact SCSI+8. I suspect the painfully old SCSI+RAM 1/0 is one as well, and possibly the two models of the SCSI+2/x cards, too. I'm fairly certain the Impact HC Rev 4 and Rev 5 cards became 64K-sized, and I know the HC2 I/O PIC is 64K. I know the A3001 cards were 64K for their AT interface, and the DPRC was common after that, and is 64K. I have a Buddha (20th) and of course the X-Surf 100, but not the earlier ones.


    One thing that could fix those situations (for active developers): if the board ends up at an unsupported address (in the OS config list), the driver software could be revised to note that the board's config address is not supported and that re-arrangement of the expansion slot position is needed.

    One of the 'natures of the beast' in your favor is slot order. If your ACAs are always going to be first, there's no issue with getting your preferred slots in the $E9-$EF config space, or even the 8M space. The only full-homework assignment is when you are in the general Zorro slots, or somewhere downstream (in general) from the 'first PIC'. I am sure that any Z2 accelerator with AutoConfig 2/4/8MB RAM is going to plan for $0020.0000 RAM placement. The same would apply to any I/O PIC with a software ROM on it. Accelerators on the 24-bit systems can pretty much count on the >16MB address space being their own for that same reason. The only nod to interest above 16MB is on the A1200 with OS 3.1+, with its A3000/A4000-like CPU slot memory-detection bonus code. Hindsight being 20/20, it would have been nice to have the OS find all memory on those accelerators, as ROM-added memory is 'late' to the game and leaves OS structures in ChipRAM if there is no AutoConfig-type FastRAM.


    The post-C= era 68060 boards can get away with the $F0.0000 ROMs to fix the FPU, and insert their other I/O interfaces in that 512K 'diagnostic/cartridge' spot for that same 'I'm first' reason.


    I have thought of the 'double-pass' for AutoConfig, too, but that will also need to get some testing, and I see initial problems. I already know that the A2000 TekMagic cards need a Ctrl-A-A kick from a software-reset perspective (LoadModule hits this problem). The 3.1.4+ install process avoids attempting a software reset command on 040/060 systems for this and other cards' similar issues. My understanding is that the reset signal hits from the CPU instruction execution, but the board doesn't react fast enough to hold the CPU from taking off until the board completes its actual hardware reset.

    Thanks Henryk. I agree it needs careful efforts and to involve developers.


    I'm currently working on a logic proposal that would populate the PICs for the Z3 space from the top down (instead of the classic bottom-up). This would avoid going below $4000.0000 except in the worst-case expansion scenarios where a 1GB card or a few 512M/256M cards are present; only the last cards in the order are subject to the less-happy memory spaces. With better sub-allocation of the smaller boards (which is something we know is missing, unless they are sequentially well-ordered), the chances of an issue are much reduced. Re-arranging the boards would help in most cases.
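    A rough allocator sketch of that top-down idea (the window top, size-alignment rule, and card sizes are my illustrative assumptions, not the actual proposal's numbers):

```python
# Hand out Z3 bases from the top of the $4000.0000-$7FFF.FFFF window downward,
# so only the last boards in a worst-case load spill below $4000.0000.

TOP = 0x8000_0000

def z3_top_down(sizes):
    """Return (size, base) pairs, allocating from TOP downward, size-aligned."""
    next_free = TOP
    out = []
    for size in sizes:
        base = (next_free - size) // size * size   # align down to own size
        out.append((size, base))
        next_free = base
    return out

MB = 1024 * 1024
for size, base in z3_top_down([256 * MB, 512 * MB, 64 * MB, 16 * MB]):
    print(f"{size // MB:4d} MB -> {hex(base)}")
```

    With this ordering, only the trailing 64MB and 16MB cards in this worst-case mix end up below $4000.0000; a modest load never gets there at all.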


    I have other things in the proposal that look to address oddities I have found. I also have a test/developer idea or few.

    Noted, and thanks for the effort into looking at it.


    My assumption is that the OS (expansion.library) is processing each generic PIC while it resides at the $E8.0000 base config address (or the other Z3 high address). From there, it's either programming the card's destination address, or writing to the $4c ShutUp trigger (if supported). I have multiple A2091 and GVP DPRC cards which seem to behave as that (Shut?=Yes) feature intended. Older boards, with discrete TTL components, at a real-estate premium, and/or aiming for the most cost-reduced design, probably covered the Shut? corner case with the least amount of logic, and left it unsupported (ShutUp?=No) fairly often. I have several older cards from both noted sources, and other makers, which are like that. It's a valid solution in my opinion.


    I am going to follow up with the devs on the OS end, too. I know this dark corner of the Amiga OS wasn't deeply vetted with modern CPUs back in the OS 1.x / 68000 days. It was extended in only a basic effort for the A3000 in OS 2.x for Z3 PICs, with very little testing, and apparently none into the worst-case scenario (a full PIC environment). It seems even the OS 3.x efforts that came later for the AGA systems left much of the 1991 3rd Edition HRM's defined extensions unbuilt, and no further slotting refinements were added. For the OS 3.1.4 efforts, a number of fast-CPU HW race conditions were addressed in the dev work (I found them with fast 040/060 boards). They also addressed some address/register read/write overrun edge cases; the expansion RelNotes cover that detail. Nothing was done to address the overall card-population logic engine.


    What the OS does with that no-Shut situation remains a question to be answered, too. The card needs either to be sent somewhere 'safe' and made unavailable to its driver/software (pseudo-shut), in order to keep processing another possible card for another expansion pool with space open, or card processing has to stop with that board left unconfigured (at $E8.0000). I am under the belief that the OS does some kind of the former at this point, with some possibly odd results. It needs some further thought.

    Hi Jens,


    I might have found something amiss with the X-Surf 100 AutoConfig Shut mechanism. It's not a show stopper, and more like a corner case, but maybe if you have time to look at it. This is part of my testing for the post-3.2.1 Amiga OS AutoConfig. Low priority, in my opinion:


    I fill Z2 AutoConfig with a GVP G-Force 030 with AutoConfig RAM and the DPRC PIC.

    - 8MB AutoConfig FastRAM at $20.0000-9F.FFFF

    - 64K PIC for DPRC at $E9.0000.

    I fill the next 3 Z2 slots with some early GVP Impact cards that I know are 128K sized PICs. That covers:

    - 128K PIC at $EA.0000

    - 128K PIC at $EC.0000

    - 128K PIC at $EE.0000
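    As a sanity check, the fill above can be modeled as the seven 64K granules of the Z2 I/O window (a toy model of my test setup, not OS code):

```python
# The $E9-$EF I/O window holds seven 64K granules; the DPRC plus three 128K
# Impact PICs consume all of them, matching the board loadout listed above.

granules = {0xE9 + i for i in range(7)}      # $E9..$EF, one per 64K slot

def take(base_hi, size_64k):
    for g in range(base_hi, base_hi + size_64k):
        granules.discard(g)

take(0xE9, 1)        # DPRC, 64K
take(0xEA, 2)        # 128K Impact PIC
take(0xEC, 2)        # 128K Impact PIC
take(0xEE, 2)        # 128K Impact PIC
print("free I/O granules:", sorted(granules))   # nothing left for the X-Surf
```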


    Expansion space is now full - with 2 open slots in the A2000. The X-Surf 100 becomes the next card, in slot 4, and obviously there's no place to put it. At power-on, the LED goes bright, and AutoConfig hangs here. It's not possible to get to an Early Startup menu. I know it's not getting to the point where the upstream cards' driver ROMs would activate. I have dropped in several other cards which support Shut (and the OS shuts them), so I know what should happen. I have some cards that cannot Shut, and their behavior is similar to this issue. The X-Surf 100 is supposed to support ShutUp (the flag is set, based on ShowConfig's debug output).


    If I reduce the FastRAM to 4MB AutoConfig, or remove the 8MB FastRAM PIC altogether, I hit an OS AutoConfig bug that I'm reporting: the OS will drop any I/O-space card normally intended for $E9.0000-$EF.FFFF into the spare 8M space. The X-Surf ends up there in this situation, since in this scenario there is 'space' left in the 8M area. The 'put board into 8M space' flag is set to No, so the OS shouldn't drop it there. I have some other I/O-space boards that balk at the 8M destination, too (drivers crash, LEDs flash, or requesters with issues get tossed), so it's something for us to go after in expansion.library down the road.


    My query is therefore: can a check be done for what happens to the board when Shut is programmed by the OS? Does the board actually accept it and 'disappear' until the next reset? Will it pass Config_Out on to the next board, or not? I have other I/O board examples where AutoConfig downstream is denied - CFG_Out is never passed to the next slot, leaving all boards downstream out of luck. I also have ones where CFG_Out is allowed to go on to the next slot's board (and if there's no place for them, they get Shut, or worse if they aren't as well behaved).


    There aren't any tools at this point to figure out what the OS is doing that early, but I have enough cards' behaviors under my belt to think the X-Surf board isn't doing what was planned. It's a fair bet that most developers back in the day didn't have a way to simulate/test a full expansion space, either.


    A note that I also checked 3.1, 3.1.4 and 3.2 release ROMs - same behavior.

    Actually, GuruROM v6 will use the following priority for DMA transfer buffers (with CPU copy-up) when one is needed for transferring data to/from targets above the 16MB Zorro II address space:

    - Allocate a 16K buffer in FastRAM pool which has the 24-bit DMA attribute.

    - Allocate a 16K buffer in ChipRAM pool which always has the 24-bit DMA attribute (but subject to Agnus taking bus priority, so slower).

    - PIO - This option will never apply to any A2000-based use case. Mentioned for completeness only.
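    A toy model of that fallback order (illustrative only; the pool names and the 16K size come from the description above, not from GuruROM source):

```python
# Walk the memory pools in preference order and take the first one carrying
# the 24-bit DMA attribute as the home of the 16K CPU copy-up bounce buffer.

def pick_bounce_buffer(pools):
    """pools: list of (name, has_24bit_dma) in system preference order."""
    for name, dma24 in pools:
        if dma24:
            return (name, 16 * 1024)     # 16K bounce buffer, CPU copy-up
    return ("PIO", 0)                    # never reached on an A2000

# FastRAM with the 24-bit DMA attribute wins over ChipRAM.
print(pick_bounce_buffer([("FastRAM", True), ("ChipRAM", True)]))
# No DMA-capable FastRAM: fall back to ChipRAM (slower under Agnus priority).
print(pick_bounce_buffer([("FastRAM", False), ("ChipRAM", True)]))
```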


    This prioritization will get tripped up by the $C0.0000 remap tricks (as noted in the thread), as the native $C0.0000 memory range is normally tagged with the 24-bit DMA attribute. The driver also assumes all target addresses <16MB are natively 24-bit DMA-accessible in an A2000.

    If you have a GVP Series II SCSI card, the FastROM (gvpscsi.device) v3.12 and the later 4.x series driver also has the built-in code to support CPU copy-up through Z2 24-bit DMA-flagged system RAM buffer. This is just like GuruROM v6 handles it, and GuruROM is gvpscsi's later cousin.


    Set the DMA mask in mountlists/DOSDrivers or partition boot blocks to its full value, 0xFFFFFFFE, and the driver does all the transfer work. If the mask is lower (hint: HDToolBox defaults to a lower value as a fail-safe), the filesystem tries to do the work, intervening before the driver sees the data buffer address to pick up or drop off at, and everything runs much slower. If you have OS 3.1.4 or 3.2, this detail is covered in the provided FAQs (I wrote them).
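    The mask interaction can be sketched roughly like this (my simplification, not the actual filesystem code; `driver_handles` is a hypothetical name):

```python
# A buffer address passes the mask test when masking changes no address bits;
# a full-width 0xFFFFFFFE mask only rejects odd (unaligned) addresses, while
# a lower fail-safe mask rejects any buffer above its reach, forcing the
# filesystem to intervene and copy.

def driver_handles(addr, mask=0xFFFFFFFE):
    return (addr & mask) == addr

print(driver_handles(0x0020_0000))                     # Z2 FastRAM buffer: ok
print(driver_handles(0x0800_1000, mask=0x00FF_FFFE))   # >16MB RAM vs 24-bit mask
```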


    In all cases, it's best to have SOME AutoConfig Z2 FastRAM memory that supports DMA (located in $0020.0000-009F.FFFF) in the system. Your A2630 base 4MB AutoConfig memory suits this need well.


    Performance is limited to about 1/2 of the Z2 bus speeds, with the CPU's performance to its 32-bit RAM, and the device and SCSI bus performance, as additional +/- factors.


    Disclaimer: I also build the GuruROMs.

    For an A2091, you want a GuruROM.

    I'm in agreement about the testing being a challenge.


    I have simulated in hardware several of the messy expansion-overload scenarios. Finding a solution that best covers the hairy edge of physical expansion - be it by modern active developers, or what was done by the developers of the past - will take effort. FWIW - in one of the scenarios before the ROM for 3.2.1 was made, my Z3 ZZ9900 ended up at $9000.0000. I couldn't test the quick fix they used before the release due to personal travel/work activities, but the ugly Z3 card processing in the code was acknowledged then, and it was noted it will need a revisit.


    I am being mindful of your expansions in this - my hope is to get the physical systems with as many as 6 physical PICs (with chained-config possibilities) handled as well as possible. The runaway Z3 config (upwards in address), the sorting of Z3 cards into available smaller spaces (like Z2 does), and the expansion-space-full / no-shut scenarios need some future attention, though.

    AutoConfig of a Z3 board at $8000.0000 should not be possible. The Hardware Reference Manual, 3rd edition, marks all space above the 2GB line, sans the Z3 board detect/config address, as reserved. The Z3 expansion space map is on p387.
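To put that range rule in concrete terms, here is a minimal sketch (my own helper, not anything from expansion.library) of the legality check implied by the HRM map: a configured Z3 base must sit in $1000.0000-$7FFF.FFFF, so any base with A31 set - like the buggy $8000.0000 placement - is out of bounds.

```c
#include <stdint.h>

/* Zorro III expansion space per the HRM 3rd ed. map (A3000 Appendix D
 * agrees): $1000.0000-$7FFF.FFFF. Everything at or above $8000.0000,
 * except the board detect/config address itself, is reserved. */
#define Z3_SPACE_START 0x10000000UL
#define Z3_SPACE_END   0x7FFFFFFFUL

/* Hypothetical helper: returns nonzero if 'base' is a legal place to
 * land a configured Z3 board. */
static int z3_base_is_legal(uint32_t base)
{
    return base >= Z3_SPACE_START && base <= Z3_SPACE_END;
}
```

Anything the configurator hands out should pass this check; the $8000.0000 case discussed here would fail it.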


    I found the Z3 $8000.0000 AutoConfig bug by overloading the expansion space on an A4000T. I reported it for 3.2->3.2.1 but I don't have the details of what was done, either. I had the impression what was done was a band-aid, and would be looked into again in the future.

    You have overloaded Z3 from the $4000.0000 start point upward. One of the bugs I found is that it never uses the Z3 high address space from $1000.0000-$3FFF.FFFF. I've suggested they maintain the build-up sequence from $4000.0000, then build down from that address, to avoid potential issues with some customizations.
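The build-up-then-build-down order can be sketched as a toy allocator (this is my illustration of the suggestion, not the actual expansion.library code; it assumes power-of-two board sizes configured largest-first, as AutoConfig sorting intends, and ignores alignment subtleties):

```c
#include <stdint.h>

#define Z3_PRIMARY_BASE 0x40000000UL /* classic build-up start point      */
#define Z3_PRIMARY_END  0x80000000UL /* exclusive: never cross the 2GB line */
#define Z3_LOWER_BASE   0x10000000UL /* normally-unused low Z3 space      */

static uint32_t up_next   = Z3_PRIMARY_BASE; /* next free base going up   */
static uint32_t down_next = Z3_PRIMARY_BASE; /* next free limit going down */

/* Toy placement: keep building up from $4000.0000 as today, and only
 * once the upper region is exhausted start building down into
 * $1000.0000-$3FFF.FFFF - so existing first-board assumptions about
 * $4000.0000 still hold. Returns 0 if nothing fits. */
static uint32_t z3_place(uint32_t size)
{
    if (up_next + size <= Z3_PRIMARY_END) {      /* build up first */
        uint32_t base = up_next;
        up_next += size;
        return base;
    }
    if (down_next >= Z3_LOWER_BASE + size) {     /* then build down */
        down_next -= size;
        return down_next;
    }
    return 0;                                    /* expansion space full */
}
```

The point of the ordering is that a first-in-chain board hard-wired to expect $4000.0000 (as described later in this thread) keeps working, while overflow boards land in the otherwise-wasted low half.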


    One of the other bugs in Z3 is the inability to 'sort' smaller cards into the available address slots (like Z2 can with 64K/128K I/O and 512K/1M/2M RAM/Shared memory space cards in the 8M space). That was how I found the bug noted above.

    Your solution for now is to back off the large memory expansion configuration that results in any board being placed at $8000.0000 - at least any that causes you trouble.

    My opinion: there is a need to revisit both Z2 and Z3 AutoConfig in the future, as there are worst-case expansion-card situations that could never be tested back in the day. I currently have enough physical cards to overflow either the Z2 or the Z3 expansion space (or both), so I hope to help them sort it out in a future update.

    Jens is probably well aware of this, but users aren't always aware of SetPatch's behavior in different versions of the OS (the points below apply only in the context of 68030s and MuLibs - other CPUs differ):


    - SetPatch in all 3.x versions will turn on the 68030 data cache by default without any CPU library present.

    - SetPatch <= 3.1 does not load any MuLibs CPU library components unless manually patched for the MuLibs environment.

    - A patched SetPatch, when called, then tries to load 680x0.library, which will trigger a 68030.library load.

    - The CPU library load will also check ENVARC:mmu-configuration for specific options and user-defined environment settings.


    Depending on your installation answers, your MuLibs installation may or may not have set the mmu-configuration file in ENVARC: to certain ACA-friendly settings. This may be important information.


    The mmu.library is presumably called only when some aspect of the MMU configuration table is read or modified by an mmu.library-aware tool. I don't believe it's needed at library load time, but it will be later if you use a MuLibs-aware tool.

    I'm going to suggest capturing the MMU settings with the MuSCAN tool. Its output may reveal an unexpected setup that Jens might spot.

    SysInfo tends to be very CPU-clock/cache oriented in its 'benchmark' of CPU performance, so it must be taken with a few grains of salt. Unless you have equally good memory performance, the bias it gives to raw clock speed is at times misleading.


    Benchmarks (in general) also tend to favor sequential memory accesses - something the 68030 can excel at if it is supported by an efficient burst-access memory design. Random memory access and I/O, however, tend to defeat the cache, and overall memory performance becomes more important.


    That said, one should also take into account a memory bus-speed test (bustest by mlev on Aminet is one), and AIBB offers some better 'system-intensive' tests that show how different hardware configurations can affect overall performance.
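The sequential-vs-random distinction is easy to demonstrate with a toy sketch (my own illustration, with arbitrary sizes and stride - not any of the benchmark tools named above): both loops below touch exactly the same data, but the first walks it in cache/burst-friendly order while the second hops through it in a scrambled order. Timing each on real hardware is what separates a raw-clock benchmark from a memory-system benchmark.

```c
/* Toy cache-behavior demo: same work, different access patterns. */
#define N 65536

/* Sequential walk - what burst-friendly benchmarks reward. */
static long sum_seq(const int *a)
{
    long s = 0;
    for (int i = 0; i < N; i++)
        s += a[i];
    return s;
}

/* Scrambled walk - 9277 is odd, hence coprime to N (a power of two),
 * so this visits every element exactly once, just in a
 * cache-unfriendly order. Same result, very different memory traffic. */
static long sum_strided(const int *a)
{
    long s = 0;
    for (int i = 0, j = 0; i < N; i++, j = (j + 9277) % N)
        s += a[j];
    return s;
}
```

On a cached CPU with slow RAM, the second loop's wall-clock time approaches raw memory speed, which is exactly the effect that clock-oriented benchmarks hide.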

    As I read the detail for enabling 2MB ChipRAM, the decision to re-use the JP7 pad and its pin path onto the expansion connector may be your solution. It appears that is the address line enabling the additional 1MB of RAM ($0010.0000-$001F.FFFF). If it's disconnected from the expansion card, the RAM on the card won't respond, and you will have only 1MB ChipRAM. The size of that memory is detected by the Kickstart memory test routines early at power-on.


    Assuming the above is correct, you might get away with a simple toggle switch on that line, as long as the wiring is short and doesn't pick up any electrical noise. Otherwise, a bit of gate logic, controlled by an actual switch, that allows or blocks the address line signal from leaving the A500 for the expansion would, in theory, do the same thing.

    And if the 16MHz A3000 is equipped with G-Force 040?


    ...then it should be no problem to set the jumpers to 25MHz board clock - that will also increase Z3 performance.

    Agreed on the performance boost of 25MHz vs 16MHz, and RAMSEY access will likely improve, but I believe the CPU/FPU will still run at their own faster clocks regardless, no?

    That uses standard Z2 autoconfig, but with the trick of claiming the $e8.0000 accesses for the memory card. With that trick, I sneak myself before any other Autoconfig card, because the main board will not see the access to $e8.0000 unless I let it. Unfortunately, I cannot push myself into the 64k space that overlays the Kickstart, hence the limitation to only "MapROM" the physically installed ROM. So yep, this is hacky, but avoids manual addmem in startup-sequence, and actually lets expansion (and other early stuff) reside in 32-bit fastmem, as I'm also mapping to trapdoor space. But.. we're side tracking..

    I had suggested that idea (hiding the motherboard Z2 resources until after config) a few years ago to another developer on the forums when they were just learning the old lessons and pitfalls of Z2 AutoConfig and FastRAM with a 24-bit motherboard and a 32-bit addressing CPU. I still have burn marks from being the messenger, but they (and others) eventually chose that route. You probably pre-dated me on figuring it out, but good to know. The page-zero theft at startup is another good one.


    I have the HRM (the physical book!) here - 3rd edition, along with the RKRMs (all three of them). One of my very first eBay purchases ever, before Y2K! Couldn't find that specific sentence browsing through the Autoconfig/Z3 chapter, but I didn't do a thorough search. Would be nice if you give me a page#, so I can quote that to anyone who says I deviated from the specs by leaving out A31 from the X-Surf-100 address storage register :-)


    As for the A3000D memory map, Appendix D, page 315 (English version). Zorro III Expansion: $1000.0000 - $7FFF.FFFF. Diagram on the next page.


    As for my comment to also use the space below $4000.0000, I'd like to see that only being used when an upper boundary has been reached. Reason is that my minimal Z3 implementations on the A1200 accelerators are using no address latches at all. Since I know that my board is the first in the autoconfig chain, my address comparator is fixed at $4000.0000 for those cards. This will of course break if the start address of a hypothetical new version of expansion is lowered.

    I will make sure it's kept in mind by the devs as they hopefully address it. Expansion.library was something Thor was very delicate with after I showed up on the 3.1.4 beta team with my laundry list of concerns. The initial transfer to SAS/C compiling from the old Greenwood days apparently gave them headaches when it came to working on the hardware directly; some things were better done with inline 68K, or needed some compiler optimizations worked around. Getting him to add some more hardware delays, so fast CPUs did not race past multiple slow LS-rated TTL gates on older cards, was all I could hope for back then. I still hope they will open up the space above $009F.FFFF as overflow when the I/O space at $E9.0000 is full, or when a small enough card might fit there from the 8M space. It's reserved on the 68K machines, but it's actually defined as more I/O space on the A3000 - also not implemented in Kickstart 3.1.

    I asked them a while back to consider supporting the Z3 expansion spaces on Z2 hosts for Z3 cards that appear at the Z2 config space (as the A3000/4000 supports, if my memory serves). It would help any of the current CPU card makers AutoConfig their RAM and other resources - being the first card to config, they might have an option as to where they want to be placed, perhaps by expressing the base address(es) as an extension. I think the correction to expansion.library mainly applies to the big-box Amiga hosts. I don't think the A1200 needs any further tweaking unless we are talking about officially supporting an extended 512K ROM at $E0.0000.

    One thing I'd like them to build into any further update is the ability to test for and custom-add the extended ROM space at $E0.0000 - plenty of people do custom ROMs, and an official support option would be nice (not unlike the $F0.0000 hook). I'm thinking of the people that put these machines in towers; it would open up official options for them anyway.