I have a dual boot system (Windows 10 alongside Ubuntu 20.04 LTS).
Initially I had the same problem on Windows 10: if I rebooted the machine, the SSD wouldn't be detected unless I shut down and powered the PC back on. However, after applying a suggested fix that sets the Windows 10 controller driver to Standard AHCI, the issue was solved there.
Now I'm facing the same problem on my second OS, Ubuntu 20.04. If I reboot the machine from Ubuntu, the SSD again won't be detected unless I shut down and power the PC on again.
I spent several days trying to figure out why this happens on Ubuntu: I reinstalled the OS, fixed the boot loader, and verified that Ubuntu's boot works properly in conjunction with Windows 10. I also tried a fix on Ubuntu that was meant for NVMe-based SSDs (predictably it didn't work, since the 870 QVO is SATA, not NVMe).
Just for reference, here is the link to the fix for NVMe-based SSDs: https://askubuntu.com/questions/972200/bios-does-not-detect-ssd-after-reboot-from-ubuntu-16-04-dell-...
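For anyone curious, the NVMe fix in that link boils down to a kernel parameter that disables the drive's autonomous power-state transitions (APST). A minimal sketch of that change, which again applies only to NVMe drives and not the 870 QVO:

```shell
# In /etc/default/grub, append the parameter to the kernel command line
# (nvme_core.default_ps_max_latency_us=0 disables NVMe APST):
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nvme_core.default_ps_max_latency_us=0"
```

Then run `sudo update-grub` and reboot for the parameter to take effect.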
I couldn't find a single case anywhere on the web of someone with the exact same problem on the exact same SSD model and type that I have. I have already reported a bug to Ubuntu about this, but I'm also posting here in the hope that someone from Samsung will actually see it and hopefully produce a fix for this problem on Ubuntu.
If anyone has an idea why the SSD cannot be detected when rebooting from Ubuntu specifically, I'd be very glad to hear it. As mentioned above, the same problem occurred on Windows 10 but was fixed by switching to the Standard AHCI driver.
P.S. I have already updated both the BIOS and the SSD's firmware, but the results are the same, at least on Ubuntu. My Ubuntu configuration is also set to AHCI mode.
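On the Linux side, you can double-check that the controller really is running in AHCI mode from a terminal. A quick diagnostic sketch (the exact output lines will vary by chipset):

```shell
# Show the SATA controller; in AHCI mode the lspci line mentions "AHCI":
lspci | grep -i 'sata'

# Confirm the kernel's ahci driver bound to it and look for link resets:
sudo dmesg | grep -iE 'ahci|ata[0-9]+:'
```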
Very similar issue here: Ubuntu 20.04 64-bit on a Raspberry Pi. I have two 870 QVOs attached via USB adaptors. Each time the machine reboots they are not detected properly; they work again if I unplug them and plug them back in while the Pi is running. They are set up as RAID 1, so not mounted directly, and they are still there in /dev/* but not showing up with lsusb. I can see them being found in the logs, but no explanation of why they are unavailable once the OS finishes booting. I was upgrading from 860 EVO disks, which have been working fine; I just need more capacity. Very frustrating, as I can't make progress until this is resolved.
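For the USB-attached case, a few commands can narrow down whether the bridges disappear at the USB level or the block layer. A diagnostic sketch (assuming the array is /dev/md0; adjust to your setup):

```shell
lsusb                         # the SATA-to-USB bridges should be listed here
cat /proc/mdstat              # RAID 1 array state as the kernel sees it
sudo mdadm --detail /dev/md0  # per-member status (assumes /dev/md0)

# Attach/detach trail and any UAS or usb-storage errors since boot:
sudo dmesg | grep -iE 'usb [0-9]|uas|usb-storage|sd[a-z]'
```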
It's a pain. This was my first SSD from Samsung, and I'm already facing issues with this model on Ubuntu. However, I can't really blame only Samsung; it's also a problem with Ubuntu, which hasn't yet patched the OS for this kind of SSD, I guess. I opened a bug there as well.
I am not so sure. I went down the path of trying to update the firmware today:
- First, the updater can't be run from ARM, so I had to move the drives to another machine to do it
- Second, the Linux method is convoluted
- The Windows method didn't work at all; the software couldn't update the disks (on a fresh install of Windows)
- Booting the update ISO just hung
I returned them an hour ago and bought some Crucial ones instead.
For Windows 10, the solution that worked for me came from the following post. By the way, just a heads up: you might experience the same problem with other SSDs as well:
But if you really want to stay on Ubuntu, there is no timeline for when Ubuntu will fix this in the OS. Even with NVMe-based SSDs you would have had the same problem on Ubuntu. They say it's been patched for NVMe drives in recent Linux kernel releases, but I haven't tested that. However, if the new SSD you're getting is NVMe-based, it should either work or be fixable on Ubuntu, since a fix already exists for the NVMe variant of the problem described in this post.
I hope Samsung wakes up and updates the firmware to fix all of these problems, instead of us having to reprogram the OS to make their SSDs work properly.
This is also an issue for SATA SSDs. And it is not necessarily an Ubuntu problem at boot so much as at shutdown. GRUB is not Ubuntu, yet the BIOS does not even see the boot partition. I have multi-boot machines; I have forced UEFI and forced Legacy, with no difference. If the last OS booted was Windows 10 with the generic AHCI driver, then GRUB boots. Not so if it was Linux, so something in the last gasp of shutdown must be causing this.
On Windows 10, setting the controller to Standard AHCI mode fixes the problem for a SATA SSD. The same doesn't work on Ubuntu, however. The closest thing I found was a similar issue affecting NVMe-based SSDs, caused by the drive's power management: the power-saving setting made the SSD shut down completely on a warm reboot under Ubuntu. That problem was patched for NVMe drives in recent Ubuntu kernel updates. For SATA SSDs, though, there are no such updates, since the issue seems specific to Samsung; other SSD brands don't seem to draw nearly as many complaints about it. There is an open bug for this, and they appear to be working on it, but there's no telling when it will be fixed. Samsung has to patch this somehow, since other brands don't cause this many issues as far as I know. (I'm not an expert on the matter, so I could be wrong.)
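Since the NVMe variant turned out to be a power-management bug, one cheap experiment on the SATA side is to disable SATA link power management and see whether warm reboots start working. A hedged sketch using the standard sysfs knobs (this does not persist across reboots, and some kernels or BIOS settings may not expose these files):

```shell
# Show the current link power policy for each SATA host:
for h in /sys/class/scsi_host/host*/link_power_management_policy; do
    printf '%s: %s\n' "$h" "$(cat "$h")"
done

# Experimentally force full performance (no link power saving):
for h in /sys/class/scsi_host/host*/link_power_management_policy; do
    echo max_performance | sudo tee "$h" > /dev/null
done
```

If the drive then survives a warm reboot, that would point at link power management rather than the drive's internal firmware alone.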
I am pretty sure the BIOS is not seeing it, so it is as if the disk is making itself unavailable. I suspect it has prematurely reported readiness while an asynchronous write is still completing; lacking enterprise-grade capacitors, it may only abandon that write if power is dropped. So it is still busy even after shutdown thinks it is done. Why would it either lock the boot sector or simply go offline this way? Rolling back "fixes" for data corruption is not advisable, and it may even have been a mistake to upgrade the firmware. It only happens on warm boots from Linux; it is fine on a reboot from the BIOS, Windows, or Memtest. Windows did the AHCI driver swap automatically (the crash screen on first boot had a typo, so it was a rush job from Microsoft). It may just need a delay. I cannot believe Samsung can't diagnose this with a breakout build in the lab; it should just behave. The Raspberry Pi community is having issues too. I've got a laptop and would prefer not to run two drives just to have one that boots, not to mention that fixing this would require out-of-band access on a remote desktop or server.
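If the drive really is caught mid-flush on a warm reboot, one crude experiment is to flush and idle it yourself before rebooting, instead of letting the shutdown path race it. A hypothetical sketch (assumes the QVO is /dev/sda and hdparm is installed; /dev/sda is my guess, check yours with lsblk):

```shell
sync                     # flush dirty pages to the disk first
sudo hdparm -y /dev/sda  # issue SATA STANDBY IMMEDIATE to the drive
sudo reboot
```

If a warm reboot succeeds after this sequence but fails without it, that would support the "still busy at reboot" theory.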