You don't really need the strap. I've never bothered, and I work with ESD-sensitive components pretty frequently. All you've got to do is ground yourself beforehand, and it's not that hard to just touch the ground terminal on the oscilloscope before fucking with components.
In your case, just go touch a faucet or a metal bit on a lamp or something. Do your work off carpet, on a non-conductive surface (like a wood table), and don't wear a wool jumper or anything else that builds up a static charge while you work. Try to grab parts by things like the heatsinks when moving them around as much as possible. A block of metal cares a lot less about a static shock than the components on the PCB.
If you want to be really paranoid/fancy, stick the power supply on your workbench and plug it into an outlet, but keep it switched off while you work (the case stays earthed through the mains plug even with the switch off). Then just tap it every few minutes. This also gives you an easy-to-access place to ground yourself if you end up moving around, which is a bit more convenient than remembering to tap a lamp again before sitting down.
That said, while it's not needed, a strap does still reduce the risk of ESD damage, since (provided the other end is properly grounded) you no longer need to think about it. So if you're looking at 900 GBP of components and feeling nervous, yeah, go for it. It should only cost you like 5 quid. If you do grab one, do the thing with the power supply and just clip the non-wrist end to it.
If you want an anti-static platform to breadboard on, go ahead and get one, but your motherboard box works pretty well too.
Every SSD I've ever used has bottlenecked when writing to it. I don't have that issue with RAIDed HDDs. For my uses, I get the best performance from my setup, and that's proven through actual application. That being said, it isn't one-size-fits-all, so what works best for me is simply what's best for me.
Write time is not all that important, nor really is read time. Access time is the point of an SSD, which is why it causes such dramatic improvements.
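If you want to see the difference yourself, here's a rough Python sketch along those lines. The file path is just a placeholder: point it at a multi-GB file on the drive you're curious about, and bear in mind the OS cache will flatter both numbers unless the file is cold (reboot first, or drop caches).

```python
# Quick-and-dirty sequential vs. random 4K read comparison.
# TEST_FILE is a hypothetical path; swap in a large file on the
# drive under test. Results only mean much with a cold OS cache.
import os
import random
import time

TEST_FILE = "testfile.bin"  # placeholder, use your own multi-GB file
BLOCK = 4096
READS = 2048

size = os.path.getsize(TEST_FILE)
offsets = [random.randrange(0, size - BLOCK) for _ in range(READS)]

with open(TEST_FILE, "rb") as f:
    # Sequential: read the same number of blocks front to back.
    start = time.perf_counter()
    for _ in range(READS):
        f.read(BLOCK)
    seq = time.perf_counter() - start

    # Random: seek to a fresh offset before every read. On an HDD each
    # seek costs milliseconds of head travel; on an SSD it's nearly free.
    start = time.perf_counter()
    for off in offsets:
        f.seek(off)
        f.read(BLOCK)
    rnd = time.perf_counter() - start

print(f"sequential: {READS * BLOCK / seq / 1e6:.1f} MB/s")
print(f"random 4K:  {READS * BLOCK / rnd / 1e6:.1f} MB/s")
```

On a typical HDD the random figure falls off a cliff while sequential stays in the low hundreds of MB/s; on an SSD the two are far closer, which is the whole point.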
If you're getting notably better performance out of a RAID array vs. an SSD, you're running on a system without adequate memory, using a program that makes terrible use of the available memory, or have a fairly niche application where you're frequently reading from and writing to many large files. The home user is better off installing more RAM: for the cost of a decent RAID 0 setup you can stuff 20 to 30 GB of DDR4 into your system.
In fact, with modern HDDs, RAID doesn't even provide that big a performance boost (per dollar) over single HDDs anymore. Larger single drives have *way* better data density, and the OS is better able to set the system up for sequential reads/writes. Two 7200 RPM 500 GB HDDs in RAID 0 will only provide a moderate speed improvement over a single 5400 RPM 4 TB HDD, but they'll cost about the same amount of money. You need 3 or 4 drives for it to be that big a deal, and once you're up to 3-4 drives in RAID 0 you're just asking to lose everything, so now you need more HDDs to run an incremental backup and/or even more drives to set up a RAID 10 or the like. At that point you're up to the cost of a 1 TB to 2 TB SSD. Might as well just get one of those, since it'll be vastly more reliable and have similar write speeds.
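If you want to sanity-check the economics, here's a trivial back-of-the-envelope calc. To be clear, every price and speed in it is a ballpark guess plugged in for illustration, not a benchmark or a real price list; swap in current prices before drawing conclusions.

```python
# Back-of-the-envelope cost/throughput comparison. All figures below
# are rough illustrative assumptions, not measured or quoted numbers.
options = {
    # name: (capacity_gb, price_gbp, approx_sequential_mb_s)
    "2x 500GB 7200RPM, RAID 0": (1000, 80, 2 * 150 * 0.9),  # assume ~90% scaling
    "1x 4TB 5400RPM":           (4000, 90, 180),
    "1x 1TB SATA SSD":          (1000, 70, 550),
}

for name, (cap_gb, price, mb_s) in options.items():
    print(f"{name:26s} {price / cap_gb * 1000:5.0f} GBP/TB   ~{mb_s:.0f} MB/s sequential")
```

The point isn't the exact figures; it's that the RAID 0 pair loses badly on price per TB and only modestly wins on sequential speed.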
And again, this assumes it's an application where RAID is even useful; most of where RAID beat out single HDDs in the past was not read/write time but access time, and if that's what matters, SSDs are the clear winners. Games no longer see a noticeable performance improvement from RAID, with the exception of some loading-screen times, simply because they've already preloaded the needed assets into RAM.
For samus's expected uses, there's no justification for the added cost and headache that comes with using RAID. A modern use case for RAID is something like a video production company that needs a server to handle storage and backup of video files.