Posts by robinsonb5



    I don't know for sure. I think some of the MiSTer cables have extra resistors to make sure the sync pulses aren't at anything approaching TTL level. That said, I believe - though I'm not sure - that the CM8833 actually requires TTL-level sync.

    For Minimig it will be saved as part of the config file. Likewise with the Atari ST core. For the other cores it has to be done every boot - they don't have config files yet.

    For the Minimig core, hold down an F key at bootup to force the screenmode:

    F1 for scandoubled NTSC

    F2 for scandoubled PAL

    F3 for 15KHz NTSC

    F4 for 15KHz PAL
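
    Purely for illustration, that boot-time keymap amounts to something like the following in firmware terms (hypothetical names and key codes, not the actual code):

        /* Hypothetical sketch of the Minimig boot-time screenmode keys */
        enum { KEY_F1 = 1, KEY_F2, KEY_F3, KEY_F4 };   /* placeholder codes */
        typedef enum { NTSC_SD, PAL_SD, NTSC_15K, PAL_15K } screenmode_t;

        screenmode_t screenmode_from_key(int key)
        {
            switch (key) {
            case KEY_F1: return NTSC_SD;    /* scandoubled NTSC */
            case KEY_F2: return PAL_SD;     /* scandoubled PAL  */
            case KEY_F3: return NTSC_15K;   /* 15KHz NTSC       */
            default:     return PAL_15K;    /* F4: 15KHz PAL    */
            }
        }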


    For the other cores (Atari, C16, NES, Megadrive, TurboGfx16, Master System), hold down the menu button for a second or so to toggle the scandoubler on/off.

    You can buy that cable ready-made, too - I guess it would be bad form to link it here, but a bit of judicious googling would no doubt find it.


    Most of the cores I've ported have the ability to output 15KHz video, but they default to scandoubled VGA mode. The Minimig core can save the scandoubler on-off setting as part of its configuration, but the others can't yet.

    I don't think a CM8833 would appreciate being fed with a 31KHz signal.

    With the first versions of the Atari ST core, I didn't have this bug. It didn't matter which TOS version or memory size was used.

    Can you reproduce this bug?

    Thanks, I haven't tested the STE Lotus remake yet - or the Metal Slug stage 1 - I want to try them both out. I can't think of any core changes that could cause this problem to appear now if it wasn't there before, but I'll see if I can reproduce it.

    (Note that TOS won't notice that the amount of memory has changed unless you do a hard reset.)

    OK, if the pins are really tied together without individual load resistors, you will need such a 3D lookup table to make things simple (if you can call that simple - even if you only make it 6 bits per channel, you'll end up with a 256k-entry lookup table, and 8 bits would make it a whopping 16MBytes). However, I believe the individual load resistors and the "only about 20k" resistance between the individual channels (mind, there are two resistors in the path from one channel to the other) make the mixing much simpler:
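
    To put numbers on that: the table needs one entry for every combination of the three channel codes, so

        6 bits per channel: 2^(3*6) =    262,144 entries (the "256k")
        8 bits per channel: 2^(3*8) = 16,777,216 entries (16MBytes at one byte per entry)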

    Indeed - an unmodified machine ties the three outputs together, feeding them into a single load resistor, hence the lookup table. The channels are 5-bit, but the core's LUT is currently 4x4x4 bits, so it's actually discarding one bit from each channel.

    On real hardware the Stereo modification adds the individual load resistors and the 10k mixing resistors, which as you say is actually a simpler arrangement to model.
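
    For illustration, indexing such a 4x4x4-bit LUT could look like this (a sketch, not the core's actual code) - dropping the LSB of each 5-bit channel leaves a 12-bit address, i.e. a 4096-entry table:

        /* Sketch: 12-bit table index from three 5-bit YM channel codes */
        unsigned lut_index(unsigned a5, unsigned b5, unsigned c5)
        {
            return ((a5 >> 1) << 8) | ((b5 >> 1) << 4) | (c5 >> 1);
        }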

    Since it's meant for sigma-delta, it's a simple RC filter with a 2k2 series resistor and a 4.7nF capacitor to GND. This puts the 63% level at roughly 100kHz. I wouldn't calculate with any more significant digits, as component tolerances are probably larger than that. Not perfect, as it will still produce measurable phase distortion at audible frequencies, but it'll be forgiving if you need to lower your delta-sigma frequency.
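
    Plugging in the quoted values: tau = R*C = 2.2kohm * 4.7nF ≈ 10.3µs, so 1/tau ≈ 97 * 10^3 /s - the "roughly 100kHz" figure above. (For comparison, the conventional -3dB corner, 1/(2*pi*R*C), comes out at about 15.4kHz.)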

    Thanks - the reason I was asking is that software emulators use a split filter with two different cutoff frequencies, to model the YM2149's output (different characteristics depending on whether the output is pulling high or low). This would actually be relatively easy to add to the core, but I wanted to be sure I wouldn't need to take any built-in filtering into account. (I also need to figure out how much my PC's line-in is affecting the sound from the Chameleon, before attempting to compare it with a software emulator.)
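
    For what it's worth, the split filter could be sketched like this in C - the structure those emulators use, but with placeholder coefficients rather than measured ones:

        #include <stdint.h>

        /* One-pole low-pass whose coefficient depends on whether the output
           is being pulled high or low, mimicking the asymmetric drive of the
           YM2149 output stage.  Q15 fixed point; k_up/k_down are placeholders. */
        static int32_t lp_state;

        int16_t ym_split_filter(int16_t in)
        {
            const int32_t k_up = 26000, k_down = 8000;  /* placeholder values */
            int32_t k = (in > lp_state) ? k_up : k_down;

            lp_state += (k * (in - lp_state)) >> 15;    /* arithmetic shift */
            return (int16_t)lp_state;
        }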

    Left = (66*Channel A + 33*Channel B)/100

    Right = (66*Channel C + 33*Channel B)/100

    Well that certainly makes more sense than the current situation. Of course, nothing's that simple in practice, is it?


    In the stock configuration the three outputs are directly connected, and thus load each other. They also have different characteristics depending on whether they're pulling the output up or down - so the core currently employs a 3D lookup table based on measurements of a real machine when mixing the channels.


    It would be easy enough, I guess, to halve the level of channel B before mixing - but I'm sure the stereo mod would invalidate those measurements anyway, so in stereo mode maybe it makes more sense to bypass the lookup and use simple mathematical mixing?
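
    In code terms the bypass could be as simple as the weighting suggested above (a sketch, not the core's actual code):

        /* Stereo mode: skip the LUT; channel B is shared between outputs */
        void mix_stereo(unsigned a, unsigned b, unsigned c,
                        unsigned *left, unsigned *right)
        {
            *left  = (66 * a + 33 * b) / 100;
            *right = (66 * c + 33 * b) / 100;
        }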


    I meant to ask, by the way, what's the approximate low-pass cutoff frequency of the Chameleon's audio outputs?

    If you point me to a schematic, I could give an educated guess about the loudness relation.

    Thanks - see the ascii-art in the first post of this thread: https://www.atari-forum.com/viewtopic.php?t=1627


    I *think* it'll put a 50/50 mix of channels 1 and 2 on one output, and a 50/50 mix of channels 2 and 3 on the other output - which results in channel 2 being over-represented? That's certainly consistent with the core's current stereo mode.
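
    (Folding the two outputs down to mono makes the imbalance obvious:

        L = (ch1 + ch2)/2
        R = (ch2 + ch3)/2
        L + R = ch1/2 + ch2 + ch3/2

    so channel 2 carries twice the weight of channels 1 and 3.)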


    There's also another version linked later in the thread which has some pots for adjusting the channel mixing - I suspect they were added to address this same issue.


    I apologize for my earlier error report - it was incorrect. After copying the backup copy of my hardfile to my SD card, hardfile 0 is loaded correctly. My previous hard disk file was probably faulty. I should have checked that beforehand... sorry.

    No problem at all - your testing and bug reports are valuable and much appreciated, even if this one turned out to be a false alarm. :)

    The hard disk file is no longer loaded, although everything is displayed correctly in the configuration (F12)... hmmm... am I the only one with that error?

    Have you changed anything in the meantime?


    P.S.: Have you been able to make progress with the sound problem?

    That's very strange - I tested it before release and it worked OK for me - and I've just tested it again, switching between configs which have different hard drive images in different directories. There are some minor framework changes from the C16 core and a build optimisation which shrinks the firmware a bit - but otherwise no changes which should affect hard disk images.


    I'll give some thought to how best to debug this.


    As for the sound issue, I've tinkered a little with some filtering - but I think the Chameleon's audio output already does enough that anything more is just going to sound muffled. Mono mode is no noisier than a software emulator. Stereo mode isn't supposed to work with digitised samples - if you do the stereo mod on a real ST you'll get exactly the same kind of noise, so I don't think this is actually an error.


    What might be an error is that the middle channel from the YM2149 is twice as loud as the other two - I haven't figured out yet whether that would be the case if you did the stereo mod on a real ST.

    Good work Alastair, joysticks work now with standalone TC64v1 :)

    Thanks, glad to hear it!

    Sounds interesting. If you're thinking of selling some yourself, perhaps it's a good idea to sell the connectors separately. Someone with a C16 keyboard won't need the rarer Plus4/C116 connector.

    Or just supply them with the second connector unpopulated, as per the one I have here?


    Jens, I noticed that the pin header is installed at a slight angle - I presume that's because otherwise the connector would collide with the MIDI ports on the V2 Docking Station?

    :D I like it!

    See also "Beware of bugs in this code: I have only proved it to be correct - not tried it."


    Anyhow, I've re-released the core with functional joysticks and a couple of other firmware tweaks. (Hopefully I haven't broken anything else in the process!)

    http://retroramblings.net/?page_id=1612


    Have fun!

    Hi, tested briefly, and found out that joysticks aren't recognized, at least when connected to the docking station. Joysticks work in the OSD menu, but not in the games. I tested with a standalone TC64v1.

    Thanks for testing. There's always one little detail which slips through, isn't there?!

    I know what's causing this, and it will be an easy fix.


    (The guest cores I'm porting from MiST use one of two different protocols for receiving joystick events from the controller. Originally my framework only supported the newer protocol, which is what the C16 core uses. I now support both - but I'm using the wrong one for the C16 core!)

    I suggest to use at least 50MHz sample rate for the Delta-Sigma DAC, otherwise it'll sound weird. You probably have a 100MHz-ish clock domain for SD-Ram, so that may be a good one. The FPGA is definitely fast enough for Delta-Sigma at SD-Ram speed.

    The DAC currently runs on a 32MHz clock, but it's 2nd-order so it should be OK running slower than that. There's a tradeoff between running a sigma-delta fast enough to perform well at low volume and running it so fast that imbalances between high and low pulses start to matter.

    I usually avoid the latter problem by using a hybrid approach - a sigma-delta with a 5-bit rather than a 1-bit output, feeding a PWM. It performs measurably better on real hardware than any other DAC I've tested so far, even though it's not as good in simulation - but until recently I had absolutely no idea why!

    Now I believe the reason is that (unless it's saturated) the hybrid DAC produces a fixed number of rising and falling edges in any given time period, so any imbalance between them manifests as a DC offset rather than extra harmonics (for a 1st-order) or broad-spectrum noise (2nd or higher order). (Here's a comparison of four DACs playing a ProTracker module at 1/64 the usual volume: https://www.youtube.com/watch?v=n3DxYie6AQo )
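
    As a very rough C model of that hybrid approach (a sketch with assumed widths and scaling - the real thing is HDL, and all names here are illustrative):

        #include <stdint.h>

        /* 2nd-order sigma-delta, but with a 5-bit quantiser instead of a
           1-bit one; the resulting code then drives a PWM stage. */
        unsigned sd2_step(unsigned x)                 /* x: 0..65535 */
        {
            static int32_t acc1, acc2, fb;

            acc1 += (int32_t)x - fb;                  /* first integrator  */
            acc2 += acc1 - fb;                        /* second integrator */

            int32_t code = acc2 >> 11;                /* 5-bit quantiser   */
            if (code < 0)  code = 0;                  /* clamp when saturated */
            if (code > 31) code = 31;
            fb = code << 11;                          /* feedback at input scale */
            return (unsigned)code;
        }

        /* PWM stage: each code spans 32 fast sub-cycles, output high for
           'code' of them.  Unless saturated, every period then has exactly
           one rising and one falling edge, so any rise/fall imbalance
           becomes a DC offset rather than harmonics or noise. */
        unsigned pwm_bit(unsigned code, unsigned subcycle)   /* subcycle: 0..31 */
        {
            return subcycle < code ? 1u : 0u;
        }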


    The Atari Core was configured as STE with TOS 2.06US and stereo sound. The American TOS version has the advantage that the white borders are significantly smaller than with the German version.


    After switching from stereo to mono, the noise had become significantly quieter, but was still audible. What could be the reason?

    I don't know a great deal about the Atari ST, but as I understand it, the Stereo mode imitates a non-standard modification which some people performed on their STs, placing the three YM2149 channels at the left, centre and right of the stereo image. Digital playback is done under the assumption that the three channels are wired together - as they are on a standard ST - so even on real hardware the stereo mod messes up sample playback.


    (Note, this only affects software which uses the YM chip for sample playback, and not the STE digital audio outputs.)


    In the core, I think the stereo mode feeds the middle YM channel to both stereo channels with no attenuation, so that channel will be louder than the other two (which are each only fed to one stereo channel) - I need to figure out whether the same's true of the real hardware mod.


    With stereo mode disabled, I think the noise level's consistent with mixing sounds at relatively low frequencies - but it might be possible to add some output filtering to mimic the sound of a real ST more closely.


    Switching the core to use a better audio DAC might help, too - the one it uses at the moment will be quite noisy in its current configuration.

    Especially looking forward to D64 support. Is there or will there be support for a real disk drive eventually? I am thinking of buying the relevant cable here which adds that functionality. This would be very useful as I own a few Plus/4 disks.

    In theory that shouldn't be too difficult to add. I was trying to figure out how to test it, given that I don't have a C64 disk drive - but then I remembered reading in the manual that the Chameleon can be used as a standalone drive emulator: since I have both a V1 and a V2 Chameleon, in theory I can use one as a disk drive while testing the core on the other...


    This core has so much potential as it's the only FPGA solution I know of for this system. The 264 series is becoming rarer and more expensive now due to the chips having shorter lifespans, so this core is particularly important for future preservation. So far I'm very impressed 👌

    I can't take any credit for the core itself - I merely ported it to the TC64 - but I agree it's a great core, and I certainly have much more appreciation for the 264 series having ported and experimented with it.

    Yes, the hard disk file is in the folder HDD.

    STEHDD.cfg works correctly if the hard disk file is in the root folder.

    Great, that's what I thought - it's a variant of the same bug which was screwing up writes to HDFs. Could you try this version, please, and let me know if it fixes the problem?

    (The zip contains a build for both V1 and V2 hardware.)

    Files

    • configtest.zip

      (678.78 kB)

    After I had mounted Hardfile 0 manually without problems, the configuration displayed "Hard Disk: Unit 0".

    At that point the STEHDD.cfg was saved. After restarting and loading the STEHDD.cfg, Hardfile 0 was no longer loaded. "Hard Disk: NONE" is now displayed in the configuration.

    In the core from October 2nd, 2021, "Hard Disk: Unit 1" was displayed.

    Thanks for testing. If it says "Hard Disk: NONE" it means either the core failed to find the hardfile, or wasn't able to validate the directory it's in. It's in a subdirectory, yes?