Posts by thebajaguy





    Also, the $00F0.0000 is a software insert by the accelerator card. It's the trick the post-C= engineers used to tame the 68060 FPU (the legacy diagnostic / cartridge address that Kickstart checks very early on at startup). It's certainly not 8MB. It's actually a 512K 'card' at best, and the FPU taming code is less than 2K when I went looking for it years ago. The cards that have SCSI interfaces put their registers in that zone, too. The A2000 040/060 TekMagic cards also do their AddMem there. I had thought you might use that kind of trick for the A2630 memory card. It's unofficially a viable solution for any CPU slot card when you have to 'fix' things very early on.

    Sorry on the dots - I tried. I'm going to try to make it a habit to use them. I could also use a photo of the screen, but didn't want to have to take the battery case off (the data connection is iffy from the wear of daily recharging). I also despise the chiclet keyboard on my laptop - I spend too much time editing, but my desk/workspace is limited.

    I agree the 'gap filling' for cards that are able/willing is something that should always happen - but sometimes doesn't. I've tried to see what the HW RKM says about it, but the last time I read through the pages, it was kind of vague. I do recall there's unimplemented z2 expansion space mentioned that never made it into the 3.1 ROM, but is documented in the 3rd book series. The Z3 space is like the final frontier, it's so big, but something just caught my eye in the Amiga memory maps - nothing should be placed above $8000.0000! Zorro III space is $1000.0000 through $7FFF.FFFF, and above that is all reserved except for the 64K slot at $FF00.0000-$FF00.FFFF. The fact that a board can operate at/above $8000.0000 is just chance. I think AutoConfig should be using $4000.0000 as the anchor point and placing boards above and below, but it's not.
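
To make the range rule concrete, here's a tiny C sketch of the placement check as I read the memory map (the constants and function name are my own illustration, not anything from Kickstart):

```c
#include <assert.h>
#include <stdint.h>

/* Zorro III expansion space per the memory map: $1000.0000 through
 * $7FFF.FFFF. Everything at/above $8000.0000 is reserved (aside from
 * the 64K slot at $FF00.0000). Names here are illustrative only. */
#define Z3_BASE 0x10000000UL
#define Z3_TOP  0x7FFFFFFFUL

/* Returns 1 if a board at 'addr' spanning 'size' bytes stays entirely
 * inside legal Zorro III space. */
int z3_placement_ok(uint32_t addr, uint32_t size)
{
    return addr >= Z3_BASE &&
           (uint64_t)addr + size - 1 <= (uint64_t)Z3_TOP;
}
```

    By this check, a 256MB board at $7000.0000 just fits (it ends at $7FFF.FFFF), while the same board at $8000.0000 or $9000.0000 is outside the defined space - exactly the zone where the [BAD] reports show up.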


    I'm still searching for the A1200 discussion, but got sidetracked by someone nearby needing their A2000 given some TLC - they still use it for Audio and video work.


    I'll be linking this discussion so they can review what has been mentioned. I don't have source access, either. I'm only a beta tester.

    I decided to mix up the card list, putting the ZZ9900 first, and now I ran into a [BAD] on the previously always-good 50MHz-clocked BigRAM+ when in slot 5 at $9000.0000. The other cards I have sprinkled in after the ZZ9900.


    1, z2, 2011, 32, $00E9.0000, 64K (I/O) (Accelerator I/O port $E9)

    2, z3, 28014, 4, $4000.0000, 128MB (I/O & RTG Shared) (Slot 1a)

    3, z3, 28014, 5, $5000.0000, 256MB (RAM) (Slot 1b)

    4, z3, 4626, 100, $6000.0000 (I/O), 64MB (X-Surf 1000) (Slot 2)

    5, z3, 3643, 32, $7000.0000, 256MB (RAM) (Slot 3) (66MHz)

    6, z3, 28014, 1, $8000.0000, 32MB (VA2000) (Slot 4)

    7, z3, 3643, 32, $9000.0000, 256MB (RAM) (Slot 5) (50MHz) [BAD]

    8, z2, 2017, 22, $00F0.0000, 8MB, OK (CPU Card $F0.0000)


    This swings the questions back into AutoConfig / expansion.library. Bug 2186 was opened to see if they will investigate. An e-mail was also sent to Lukas, but it did not yet include this reply's additional findings.


    I then pulled the BigRAM+ out of slot 5, inserted the GVP Spectrum z3, and it gets placed at $8200.0000 and z2 $EA without issue.

    Something I think is amiss in AutoConfig. The board size may be playing a part in this.
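
As a thought experiment on the board-size angle, here is a toy model (mine, not the actual expansion.library algorithm) of size-aligned Z3 placement: each board lands on the next multiple of its own power-of-two size. It happens to reproduce the addresses in my good list:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of Zorro III placement: a board of power-of-two 'size'
 * is placed at the next size-aligned address at/after the cursor,
 * and the cursor advances past it. Purely illustrative. */
uint32_t place_z3(uint32_t *cursor, uint32_t size)
{
    uint32_t addr = (*cursor + size - 1) & ~(size - 1);
    *cursor = addr + size;
    return addr;
}
```

    Starting the cursor at $4000.0000, a 256MB, 64MB, 2MB, 128MB, 256MB, 256MB sequence lands at $4000.0000, $5000.0000, $5400.0000, $5800.0000, $6000.0000, $7000.0000 - matching the working lineup. A large board whose size-aligned slot falls at/above $8000.0000 would be where the trouble starts.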

    I will put a query into Lukas to see if there may be something regarding the A31 decode.


    They have not released anything to the Beta team since the RTM 3.2 CD was defined, so I don't have anything newer to work with since the 3.2 release. I also haven't been able to locate a bug that matches the keywords expansion, autoconfig, or V47.97, or a few others. If you know the number, I'll be happy to look it up. The last two e-mail discussions on expansion.library since release were the A3000 SuperKickstart/030/MMU handling and the no-floppy-drive / no-boot situation.


    I have a VA2000 I'm going to try in the mix next.

    Thanks for the reply.


    Both 1) and 2) were typos on my part. It was getting late and I was doing some copy and paste.

    1) Yes, the period is a good idea. Adding one more '0' is correct. I made the same short-address mistake in the second board list, too.

    2) The board on slot 2 was at $4000.0000, and the board in slot 8 was at $7000.0000.


    I had mentally noted the z3 boards all lined up on major address boundaries this time. Therefore:

    3) I spotted another typo on my part in the second board list. Board 6 should have been $8000.0000, not at the same address as board 7.


    I am positive there were no duplicate addresses for any board. All were incrementing over the previous z2 or z3 location. I have fixed my typos and reprinted the board order below:


    First board list (corrected):


    1, z2, 2011, 32, $00E9.0000, 64K (I/O), OK (CPU Card)

    2, z3, 3643, 32, $4000.0000, 256MB (RAM), OK (Slot 1) (66MHz)

    3, z3, 4626, 100, $5000.0000 (I/O), 64MB, OK (Slot 2)

    4, z3, 2193, 1, $5400.0000 (RTG Shared), 2MB, OK (Slot 3)

    5, z2, 2193, 2, $00EA.0000 (I/O), 64K, OK (Slot 3)

    6, z3, 28014, 4, $5800.0000, 128MB (I/O & RTG Shared), OK (Slot 4)

    7, z3, 28014, 5, $6000.0000, 256MB (RAM), OK* (Slot 4)

    8, z3, 3643, 32, $7000.0000, 256MB (RAM), OK** (Slot 5) (50MHz)

    9, z3, 2017, 22, $00F0.0000, 8MB, OK (CPU Card)


    Second Board List: (corrected):


    1, z2, 2011, 32, $00E9.0000, 64K (I/O)

    2, z3, 3643, 32, $4000.0000, 256MB (RAM)

    3, z3, 4626, 100, $5000.0000 (I/O), 64MB

    4, z3, 3643, 32, $6000.0000, 256MB (RAM)

    5, z3, 28014, 4, $7000.0000, 128MB (I/O & RTG Shared),

    6, z3, 28014, 5, $8000.0000, 256MB (RAM) [BAD]

    7, z3, 2193, 1, $9000.0000 (RTG Shared), 2MB, OK

    8, z2, 2193, 2, $00EA.0000 (I/O), 64K, OK

    9, z2, 2017, 22, $00F0.0000, 8MB, OK (CPU Card)


    Kickstart is the OS 3.2 release ROM, 47.96, for both A4000 systems.


    Upon powering on the system to grab the Kickstart version for you, Board 6 in the second slot-card lineup (the ZZ9900 256MB PIC) reported Bad again.


    I have a weekend work event, so I probably won't poke at this again until either later today or tomorrow.


    Thanks

    Understood. I've spent a lot of time explaining how clock speed in an async accelerator-bus mating environment matters to access efficiency.


    Ok - I desoldered the 60MHz mini can, and tested a 50MHz, 60MHz, and 66MHz using a machine pin socket.


    50MHz, 60MHz, and 66MHz worked fine in the A4000D. The 50MHz board configured first and the socketed board followed it. The bump in the Bustest values took it near 9.8-10MB/sec with the tweaked MuSetCacheMode set to WriteThrough.


    The 50MHz and 60MHz clocks produced the same results as before (Early Startup, Bad) regardless of slot position in the A4000T/060. The 66MHz clock had some success.


    Accelerator z2 I/O PIC
    Slot 1 - BigRAM 66MHz (256 z3)
    Slot 2 - X-Surf 100 (z3 I/O PIC)
    Slot 3 - Open

    Slot 4 - ZZ9900 (128M+256M z3)

    Slot 5 - Open

    Accelerator $F00000 z2 Interface PIC


    All boards seen.


    Added the GVP Spectrum z3 card into slot 3 - all came up as expected. It adds a 2MB z3 and a 64K z2 I/O to expansion.

    Added the 50MHz BigRAM+ card to slot 5 - the 128M of the ZZ9900 reported OK, but the 256MB reported Bad (see * below). The second BigRAM+ card did not get seen. I let it sit at the Early Startup as I was recording notes, and gave it a fresh reboot. This time it came up and showed all the cards.


    Map of boards on the Early Startup (All in when it came up good):


    1, z2, 2011, 32, $E90000, 64K (I/O), OK (CPU Card)

    2, z3, 3643, 32, $4000000, 256MB (RAM), OK (Slot 1) (66MHz)

    3, z3, 4626, 100, $50000000 (I/O), 64MB, OK (Slot 2)

    4, z3, 2193, 1, $54000000 (RTG Shared), 2MB, OK (Slot 3)

    5, z2, 2193, 2, $EA0000 (I/O), 64K, OK (Slot 3)

    6, z3, 28014, 4, $58000000, 128MB (I/O & RTG Shared), OK (Slot 4)

    7, z3, 28014, 5, $60000000, 256MB (RAM), OK* (Slot 4)

    8, z3, 3643, 32, $4000000, 256MB (RAM), OK** (Slot 5) (50MHz)

    9, z3, 2017, 22, $F00000, 8MB, OK (CPU Card)


    ** - Did not appear when the card before was reported bad.


    Side by side by side, with the 3x z3 RAM cards and WriteThrough mode set, the ZZ9900 was marginally faster (maybe 0.5MB/sec) across the board. The faster-clocked board did better on writes, and the slower-clocked board did better on reads.


    I swapped the BigRAM+ card positions (66MHz in slot 5, 50MHz in slot 1) - it booted, no issues.

    I swapped the 66MHz card from slot 5 with the GVP Spectrum card from slot 3, and the second PIC of the ZZ9900 in slot 4 reports Bad. The GVP Spectrum z3 and z2 come up OK.


    1, z2, 2011, 32, $E90000, 64K (I/O)

    2, z3, 3643, 32, $4000000, 256MB (RAM)

    3, z3, 4626, 100, $50000000 (I/O), 64MB

    4, z3, 3643, 32, $6000000, 256MB (RAM)

    5, z3, 28014, 4, $70000000, 128MB (I/O & RTG Shared),

    6, z3, 28014, 5, $90000000, 256MB (RAM) [BAD]

    7, z3, 2193, 1, $90000000 (RTG Shared), 2MB, OK

    8, z2, 2193, 2, $EA0000 (I/O), 64K, OK

    9, z2, 2017, 22, $F00000, 8MB, OK (CPU Card)


    There's still something a little strange about that one BigRAM+ card, although the 66MHz clock has helped it be a little less marginal in the A4000T.


    I did try an 80MHz clock, but I'm not sure of its health. I didn't get a boot of the A4000D with it installed in the card. Aside from some slower clocks like 64MHz, and several below 40MHz, the 66MHz is the fastest I have on hand.

    Some further testing.


    The 60MHz clocked board offers a tiny, but repeatable, 0.2-0.4 MB/sec performance improvement over the 50MHz clocked board in both the A3000D/25, and the A4000D with either an A3630/25 card in it, or a T-Rex II 68060/50MHz in it. Both BigRAM+ cards were in the hosts at the same time, positions 2/3 used. I polled 0x4100.0000 and 0x5100.0000 in my bustest addr command. I swapped the card positions to confirm it followed the card. Both systems are stable with both cards.


    I added another z3 expansion card to the A4000T - a GVP Spectrum 28/24 using z3 mode. The 60MHz BigRAM+ card still disrupts the AutoConfig from where it is placed - it reports Bad in the Early Startup screen, and cards after it are not seen. Moving cards around didn't address the issue; it only moved the failure point to the 60MHz BigRAM+. I will pursue the clock change on that board when I get in the mood to de-solder, maybe this weekend.


    Side note/unrelated: Having learned a lesson on Z2 RAM cards and 68040/68060 bus behavior a few months ago (with Thor/MuLibs), we found it good policy to turn off the data cache (and therefore block the native attempt to burst-access) for the Z2 address space. It made a performance improvement, and in some cases, system health improved (repeatable hangs on that address space with 16-bit memory disappeared).

    I just forced the BigRAM+ card memory at 0x40000000 to use WriteThrough mode (the default is CopyBack with no mmu-configuration file settings made) with the 2x 68060s I have, and the write performance, near 4.5MB/sec, nearly doubled for the WriteL and WriteM values (to close to the observed 68030/25 values). Other measured values remained largely unchanged (margin of error). This was in the A4000D w/T-Rex II. The same behavior was reproducible on the A4000T with the working 50MHz BigRAM+ and the ZZ9900 256MB FastRAM (which it offers as Z3 memory).

    Net result/suggestion: use WriteThrough cache mode for the 68060 and Z3 AutoConfig memory added from the Buster expansion bus. I didn't get to test a 68040, but I suspect it may be the same. I will send my test results to Thor, as it may at least be an item of mention in the documentation, and he probably will want to reproduce it.


    More data on the BigRAM+ with the issue in the future.

    I'll be going through my small group of Z3 cards in the system to see if I can identify any others that present issues, and I'll also look into the alternate clock on the one BigRAM+ card at this point. I did some decent testing prior to 3.1.4 on AutoConfig expansion saturation (too many PICs for the address spaces). If memory serves, all the other Z3 cards I had then behaved. There was also some focus put on HW race conditions related to expansion.library / AutoConfig - there were pokey cards that may not be ready at the default config address when the config signal reached them. I'm not convinced we squashed them all, but we fixed a few potentially marginal situations.


    Thanks for the detail.

    I will probably pursue the oscillator first as I have the parts, and swapping is an easy thing from the range of spare clock parts in my drawer.


    The motherboard is stock clock rate (25MHz), based on the halved native 50.000MHz can. The accelerator uses its own 60MHz clock and doesn't drive any clock off-card.


    I have also been aware that the 'T' motherboards are, due to trace lengths, sometimes a bit more challenged when things get marginal. Maybe the presence of two 60MHz clocks in close proximity is getting picked up as noise somewhere.


    Thanks for the guidance. I'll report back if I get the chance to address it.

    I have two BigRAM+ cards. I just acquired a ZZ9900 and had this itch to see the Amiga memory at near AutoConfig-max, so I put both BigRAM+ in the A4000T. It doesn't seem to like one of the cards.



    Host is an A4000T, TekMagic (ultrasound accelerator model) 68060/64MB/60MHz (Int Clocking) 2MB/16MB on the motherboard, Buster-11 / RAMSEY-07

    Other expansion: X-Surf 100 in the 2nd furthest slot, ZZ9900 in the closest slot (to the CPU area).


    BigRAM+ #1, gTanJ, has a 50MHz clock, and is generally reliable in the system. I have had it in either one of the two slots between the above cards.

    BigRAM+ #2, DTarP, has a 60MHz clock, and is notably unreliable in the A4000T. If it doesn't toss up a Bad entry on early startup, the memory doesn't register, and most of the time AutoConfig is not passed along to the ZZ9900. Remove it, and all is back to as expected. The slot chosen really didn't matter.


    Note: I have been running the card with the 60MHz clock fine in an A3000D/25 with Buster-11 and an AmigaNet card (my software dev system) without issue. This was the first time I actually compared the two cards side by side. I had encountered the problem some time ago with, presumably, the 60MHz card, but I never chased it down since I picked up the second card somewhere, and they were working in their respective systems.


    The 60MHz clock part is a smaller can vs the 50MHz one. Neither board looks like it was modified by anyone. I gave it a decent compare under my lighted magnifying glass. Aside from some different numbers on the middle 2 lines of the Xilinx chip, and the clock part, the boards look identical down to the memory chip and all passive parts.


    Any guidance regarding the picky card? I'll probably leave it in the A3000D, but if something can be done about it, it would be nice to know.


    Edit: The 60MHz clocked board also seems to behave in an A4000D (earlier rev) 2MB/16MB, Buster -11/Ramsey 07, with a T-REX II 68060/128M/66MHz CPU card (Int clocked).


    Robert

    I'm definitely seeing the edge connector OVR signal connected to pin 77 of the DPRC, which is identified as _Slave.


    Noted on the clean state machine - but I'm guessing Jeff could do it in his sleep back then.

    I'm recalling the BR trick for the A530 - hence the sidecar unit has a small relay circuit to hold off the A500 CPU start process until the sidecar gets to full power (from its PS). I know there is then logic that steals the bus away, but the logic also has to let the DPRC and the 68030 trade off bus ownership without releasing it back to the 68000 (A500) - and I assume the same here for the 68020. This card has a software register to fall back/reset into 68020 native mode, so add that into the logic mix.

    I think we can rule this card out, and probably its earlier non-SCSI relative, if OVR won't get handled in the ACA 2000 interpretation. No arguments on the value vs low production.

    As noted earlier, these A1200 cards were designed and shipped late in GVP's existence. That is what I call the ~15 months after my departure from GVP (round 1 layoffs) and before divestiture mid-1994.

    I was aware of the A1208 SCSI/FPU/8MB RAM card - it had just come out. I didn't have any A1200 hardware to test it on, so it went to someone else, and then I was let go soon after. Having an A3KT with a souped-up GF-040/40 and Bridgecard, and every slot and bay filled, left me no way to justify another Amiga or PC on my work desk.

    I was aware of the first generation A1230, or JAWS I, which was being worked on - it has no SCSI. Its memory is software-mapped >16MB with the DPRC's ROM, like the G-Force units. I never saw it before my departure. I see a MACH120 and MACH210 on these boards for the custom logic.

    The Rev 3 is considered the A1230+, or JAWS II. They compressed the logic even further with the MACH130 to fit a clock interface (mine doesn't work) and a header under the corner for the FANG, or the A1291. That is a PCB with a 33C93A and some termination / clock control for that.


    I won't profess to be an engineer (only a tech), but I do have deeper technical notes on the entire GVP A1200 card series. I can't post any detail in public view, though. I know most of the bus logic is buried in the MACH130 and MACH210 chips as far as memory and A1200 interfacing go. I would not expect Jeff to have done much different from what was done on the A2000 G-Force cards. I'd expect widening the Amiga data bus interface to 32-bit, and maybe an adjustment to the async logic state machine to best interface with 14MHz vs 7MHz. The 32-bit memory model on the card is set for 50MHz performance against SIMM32 60ns memory, and that is regardless of the clock provided (some shipped with EC030/40s). I see the high density bus buffers along the card edge, so I assume he has control of the bus/buffers through them and the MACH130 logic.

    I assume it's an A2000-like CPU takeover, with the _BOSS interface - that A1200 line is connected to the MACH130. After that, it's just a 14MHz clocked DPRC (not 7MHz as on the G-Force/Combo cards) doing typical HW AutoConfig as a 64K Zorro II (16-bit) I/O PIC through glue logic.

    I updated the BBoAH hardware page a few weeks ago with the jumper details I felt comfortable distilling from the information I have. It may be additional information you didn't have.

    Thanks for the insight. I agree on the impact of faster memory being available at an early OS boot point. The Combo/G-Force 030 and 040 all had some form of all-high mapped memory option. Whenever that all-high large-memory mode (>16MB) was selected, getting as many of the system structures as possible out of ChipRAM after power-on had an impact (some Z2 16-bit Fast was even a little better to have). Most of the time I suggested users pick the mixed mode (some banks Z2 Auto and some high mapped, if possible). If $00C0.0000 can switch in and out on the A2630/BigRAM+ combo when Z2 DMA is around, that's as good an option to point users to as there is. Software adjustable is always good.


    I recently picked up an ACA1221LC to get a feel for some of the A1200 ACA gear. I saw the planned ACA2000 interface card, so I will be curious to try it there when it's available.


    I have a GVP A1230+ JAWS-II 8MB (all high-mapped) on the A1200 I have. It has a SCSI option that had a limited run before the mid-1994 divestiture of GVP to GVP-M and others. It's still using the DPRC Z2 DMA chip as a PIC for its ROM, to map memory, and to interface to the A1291 SCSI board (I have the SCSI module, too). It supports DMA transfer to its high-mapped RAM, and presumably ChipRAM. I consider it a rare card to encounter, though, based on its short production run. It will be interesting to see if the upcoming ACA 2000 module works with this one. It appears to be a tangent of the v4 gvpscsi, but I know this card's v5 SCSI driver is not one of Ralph's. It has some slightly different tools, and the ROM is soldered on.


    There was also the A1200 product called FANG, which was memory (32-bit AutoConfig), FPU, and DPRC DMA SCSI. It was not a CPU card. I would believe that card and other passive-resource cards are not supported on the ACA 2000.


    I will probably pick up one of these BigRAM+ cards at some point. My A2630 for whatever reason wasn't booting the last time I tried (the -07 ROMs worked on the A2620, so it's not them). When I get it going, I am now curious about the memory board.

    Thanks

    So as to not confuse the casual reader that may stumble on this post and my replies, my reference to "the DMA Mask that the driver runs from" is an internal mask value that gets automatically adjusted based on the product the driver detects it is controlling. For example, the Combo/G-Force cards have different amounts of 32-bit memory >16MB that the DPRC can directly access on the cards. It can't be adjusted on gvpscsi.device. It's possible to adjust it on omniscsi.device in the later versions. There is no relationship to any filesystem mask of any similar name.

    Thanks for this info.


    I believe the buffer allocation will happen late enough. My recollection is that the buffer is allocated only at the time when the transfer target falls outside of the DMA 'mask' that the driver works from. If so, that will be late enough to grab the first A2630 memory for its 16K buffer (yes, a good target), as I assume it's still highest priority with the 24-bit DMA flag set. It would still be prudent to remove the 24-bit DMA capable flag from the 1MB. I will still check with Ralph Babel on the topic of the 16K buffer allocation timing.


    The thing I am not sure of is whether there is any normal DMA transfer masking available for that $00C0.0000 memory when an application has allocated that memory for a standard transfer (to/from). My understanding from Ralph Babel and Jeff Boyer back in the day is that any memory in the 0-16MB address space should always be 24-bit DMA capable. A certain other accelerator developer in the past few years ran afoul of this 30+ year best practice (I was the messenger; I got shot). Soon after, they shifted their 32-bit memory to address ranges outside of the ZII space to avoid the DMA problem (the A590 and GVP A500-HD8 users ran into it, in addition to the A2000 DMA cards).


    Technically, the GuruROM driver has a DMA Mask feature (for testing) that works like a filesystem DMA Mask. The problem is the way the bits land on the actual physical memory locations: it would prevent DMA to the 8MB-16MB range (causing buffering to occur), which leaves a few corner cases with the upper part of that ZII FastRAM space. However, this driver-level mask option/solution is not possible for the native drivers on native Zorro II GVP DMA products (v3/v4), the A2091, Microbotics Hardframe, or A2090(a) - the majority of units in the field.


    As I doubt this $00C00000 memory is easily removed at this point, I might suggest a small program/tool option that does an AllocAbsolute of that specific $00C0.0000 1MB space - let the user set it on if there is any 24-bit DMA controller in the system. It may be the last memory allocated in terms of priority, but it is still best to avoid running off the cliff at some point. I could see something coming about from high memory fragmentation plus a large allocation. It should also solve the other controllers' potential problem, with the loss of 1MB. I was even thinking that the 512K of Kickstart (FastROM) might nicely go there, along with optional relocating of the SSP and VBR. Maybe it could be suggested for Thor's 68030.library to handle it that way as a special case?
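
A minimal sketch of what that reservation tool would be checking before grabbing the block (AllocAbsolute() is the real exec.library call for claiming a fixed address; the constants and helper below are just my illustration):

```c
#include <assert.h>
#include <stdint.h>

/* The block the suggested tool would claim - on a real Amiga via
 * exec.library's AllocAbsolute(RESERVE_SIZE, (APTR)RESERVE_BASE).
 * Constants and helper are illustrative. */
#define RESERVE_BASE 0x00C00000UL   /* $00C0.0000 */
#define RESERVE_SIZE 0x00100000UL   /* 1MB */
#define DMA24_LIMIT  0x01000000UL   /* 16MB: reach of a 24-bit DMA engine */

/* 1 if the region is reachable by a 24-bit Zorro II DMA controller -
 * the reason it is worth fencing off when one is present. */
int region_in_24bit_dma_space(uint32_t base, uint32_t size)
{
    return (uint64_t)base + size <= DMA24_LIMIT;
}
```

    The $00C0.0000 block sits entirely inside the 24-bit reach, which is exactly why a 24-bit DMA controller could be handed it as a transfer target if it is left in the free pool.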

    FWIW - This is Robert Miranda of (formerly) GVP Tech Support 1989-1993. I know the Series II cards quite well as a result, and I also currently support the GuruROM v6 for Ralph Babel. I am actively part of the OS 3.1.4/3.2 beta test team. I worked heavily with the developers to help eliminate AutoConfig race conditions in the revamped OS 3.1.4/3.2 before each release.

    As I was writing some of this up, the user reported in a recent 1:1 conversation that he found the A2630 board set for Unix mode, and it resulted in some very strange Config info seen in the ShowConfig tool. Changing the J304 jumper fixed it. While that was happening, I also noticed that the GVPInfo board info screen he posted in April showed Product ID 9 for the DPRC I/O PIC. The GVP SCSI driver v4 will operate only against Product ID 11. It probably ignored the board at boot as a result. I'm not sure what that J304 jumper does to the system's expansion bus logic view.

    As for the question of DMA and memory >16MB, it's time to dispel the myth propagated by shortsighted C= driver engineering of scsi.device on the A2091/A590, and the hack they and uninformed users online have propagated in the past 30 years.

    First: The gvpscsi.device v3/v4 and the GuruROM v6 driver (omniscsi.device) are all related. During the initial testing Ralph Babel and I did, we tested/solved 99% of the quirks and bugs of the 33C93/33C93A -02 and later while being driven at an odd 7.xxx MHz or 14.xxx MHz clock (the WD chip specs want something better, like 8MHz, or 10MHz, or higher) - something C= never could quite do without the -08 variant of the SCSI chip years later. The -02 chip under this driver is expected to be driven only at 7MHz, and the -04 and higher revisions are expected to be driven at 14MHz unless the card natively doesn't support it, or a jumper setting identifies that a non-default clock has been set. Disconnect/Reconnect is not an issue with any of these drivers. Proper termination is still important on any SCSI configuration.

    Second: These drivers are built as dual-level drivers. One layer drives the 33C93(A) SCSI chip bus logic. The second layer drives the I/O engine (data transfer inside the Amiga). The driver evolved in the following way:

    The v3 driver supports the early Impact (series I) buffered I/O CPU-driven cards and the Series II DMA units (Zorro II through Combo22/33 models). Support was dropped for Series I after v3.15 as code space was needed for future products (G-Force 030/040 variants). I won't cover the nuances here. When in doubt, use v3.15 on the older stuff, and with Async-only SCSI capable devices.

    The v4 driver is functional for all the GVP Series II DMA products and G-Force accelerator product line with a 33C93A SCSI chip. Aside from some corner-case enhancements, and expectation that the SCSI chip is running at 14MHz on all boards that support it, it functions just like the v3. I have documented the subtle differences elsewhere, but 4.13-4.15 are the latest.

    The GuruROM v6 is an enhancement of the v4. It is: 1) larger than the 16K ROM space provided on both the GVP Series II products and the A2091 - hence the adapter; 2) with the 14MHz 33C93A clock enabled, capable of Sync SCSI communications, potentially doubling the speeds over the SCSI cable (Amiga bus clock speeds are the next limiting factor the benchmarks run into); 3) given some finer-grained configuration options the earlier v3/v4 driver could not implement due to the code space limit; and 4) extended with A2091/A590 DMAC-02 transfer support in the transfer layer - though the use of 14MHz / Sync mode requires a hack and jumper setting on those C= boards.

    FastROM (gvpscsi.device) v3/v4, and the post-GVP GuruROM v6 (omniscsi.device) are both 24/32-bit DMA address-space aware for the supported GVP Series II & C= A2091 DMA cards in all Amiga machines. This means they natively handle memory targets located above the 24-bit (16MB) address line that would otherwise cause an address-wrap during the transfer by a 24-bit Z2 aware DMA controller.
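
The wrap hazard itself is simple to picture: only A0-A23 reach a 24-bit DMA engine, so a 32-bit target above 16MB silently aliases back into low memory. A small sketch (my own illustration, not driver code):

```c
#include <assert.h>
#include <stdint.h>

#define ADDR24_MASK 0x00FFFFFFUL  /* only A0-A23 reach a 24-bit DMA engine */

/* Where a 24-bit Zorro II DMA controller would actually transfer if
 * handed a 32-bit CPU address. */
uint32_t dma24_effective(uint32_t cpu_addr)
{
    return cpu_addr & ADDR24_MASK;
}

/* The driver must intervene (buffer the transfer) whenever the
 * effective address differs from the requested one, i.e. the target
 * sits above the 16MB line. */
int needs_bounce(uint32_t cpu_addr)
{
    return dma24_effective(cpu_addr) != cpu_addr;
}
```

    A target at $0800.0000 (32-bit accelerator RAM) would wrap to $0000.0000 - straight into low memory - which is the corruption the drivers are designed to prevent.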

    When either the older or newer driver identifies that the supported DMA SCSI interface's transfer source/target memory address is located above 16MB, the driver seeks out a 16K 24-bit DMA-capable buffer (MEMF_24BITDMA type memory) in the system memory pool. That's normally the AutoConfig 16-bit FastRAM on the HC8 or other Z2 FastRAM card - or, in this system's case with the A2630, the local 2MB of 32-bit native RAM (which appears to AutoConfig first, and is DMA-capable). That 16K buffer is used to perform a CPU copy down (or up) against the >16MB target address. For this reason, the DMA Mask in a partition/filesystem definition is ALWAYS supposed to be 0xFFFFFFFE for either the v3/v4 or v6 driver. This buffering activity for high-address RAM will result in about 1/2 the potential I/O performance for the most optimal transfers. CPU speed contributes to the efficiency. If ChipRAM must be used for the buffer, the performance impact is slightly higher (Agnus has ownership during heavier graphics needs). Either way, it is still many times more efficient to buffer-copy at the driver level than falling back to PIO, which is simply horrid (think OldFileSystem floppy performance). The driver actually can still do PIO in a rare case, and it's automatic - no jumper setting applies to this.
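
The buffered path described above can be sketched like this (names are mine; on the real system the bounce buffer would come once from 24-bit DMA-capable memory, e.g. AllocMem() with the 24-bit DMA memory attribute):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BOUNCE_SIZE 16384U  /* the driver's 16K low-memory buffer */

/* Number of DMA operations a transfer of 'len' bytes needs when it
 * must be staged through the 16K bounce buffer. */
unsigned bounce_ops(uint32_t len)
{
    return (unsigned)((len + BOUNCE_SIZE - 1) / BOUNCE_SIZE);
}

/* Sketch of the copy-down path for a read into >16MB RAM: DMA lands
 * in 'bounce', then the CPU copies each piece up to the high target.
 * 'dma_read' stands in for the real controller transfer; pass NULL
 * to model the loop without hardware. Returns DMA ops performed. */
unsigned buffered_read(uint8_t *high_target, uint32_t len, uint8_t *bounce,
                       void (*dma_read)(uint8_t *dst, uint32_t n))
{
    unsigned ops = 0;
    while (len > 0) {
        uint32_t chunk = len < BOUNCE_SIZE ? len : BOUNCE_SIZE;
        if (dma_read)
            dma_read(bounce, chunk);           /* DMA into low buffer */
        memcpy(high_target, bounce, chunk);    /* CPU copy to >16MB RAM */
        high_target += chunk;
        len -= chunk;
        ops++;
    }
    return ops;
}
```

    Each 16K piece costs one DMA plus one CPU copy - the roughly-half throughput figure mentioned above, and still far better than PIO.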

    Note: All of this described behavior is automatic. The buffering design is much more efficient (and functionally robust) than the C= hack to stuff a reduced-address DMA Mask band-aid into a (higher-level) filesystem partition entry. The native driver on the A2091 of course isn't this smart. The DMA Mask hack is their only way to fix the deficiency on that Zorro II scsi.device driver. For the record: DMA_Mask doesn't fix SCSI_Direct initiated transfers - the application or filesystem using that transfer method would still have to detect the condition on the host and buffer it some way itself. Ugly. Very inefficient at that level of the OS.
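
For contrast, the C= band-aid lives in the filesystem Mountlist/partition entry. An illustrative fragment (these are the commonly circulated values, not from any specific manual):

```
/* A2091/A590 scsi.device hack: clamp DMA to the low 16MB,
   forcing the filesystem to buffer anything above it. */
Mask = 0x00FFFFFE

/* gvpscsi.device v3/v4 and GuruROM omniscsi.device: leave the
   mask wide open - the driver buffers internally as needed. */
Mask = 0xFFFFFFFE
```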

    I hope this sheds some light on the topic, and dispels this almost 30-year myth about the GVP (and C=) cards and how 24-bit DMA is handled.

    Also: The status of GuruROM is posted by me on AmiBay under an identical nickname as I have used here.