I have had this NUC working fine for years; now all of a sudden I can't get Windows to display anything on my LCD that only has DVI. I use a DVI converter and an HDMI cable. I've ruled out converters, ruled out cables, even ruled out screens. I took the NUC, plugged it into an HDMI-only LCD, and in...
Update: that didn't seem to work. It did initially, but every time a longer-duration sleep cycle occurs, it fails with the same error. Back to the drawing board.
Well, it was probably the generic SuperSpeed hub, but there are 9 of them, so that made it a little hard.
Instead..
I was able to use USBDeview to determine which single device under the Human Interface Devices area was the Crystal (and also renamed it using the FriendlyName registry hack). I went to the power...
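In case it helps anyone else, the rename is just a string value on the device's registry key; a rough Python sketch of the idea, where the device-instance path is a placeholder — the real one comes from USBDeview or Device Manager (Details tab, "Device instance path"), and writing under HKLM needs an elevated prompt:

    import winreg

    # Placeholder device-instance key: substitute the actual HID/USB VID, PID
    # and instance ID that USBDeview reports for the Crystal's device.
    DEVICE_KEY = r"SYSTEM\CurrentControlSet\Enum\HID\VID_XXXX&PID_XXXX\0000000000"

    # FriendlyName is the label Device Manager and USBDeview display for the device.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, DEVICE_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "FriendlyName", 0, winreg.REG_SZ, "Pimax Crystal HID")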
I can now confirm that, at least as long as the USB from the Pimax Crystal hub is plugged into the PC (Win 11), I get this BSOD and forced restart on resuming from sleep.
I've tried multiple ports and an add-on card as well; all do the same thing. I also tried disabling hybrid sleep.
MB: MSI Z690-A...
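If anyone wants to narrow this down further, one thing worth checking is which devices are armed to wake the machine and what the last wake source was; a minimal Python wrapper around powercfg (run from an elevated prompt), just as a sketch:

    import subprocess

    # powercfg can list the devices currently armed to wake the system and
    # report the last wake source; both help tie resume problems to a device.
    for args in (["powercfg", "/devicequery", "wake_armed"],
                 ["powercfg", "/lastwake"]):
        print(">", " ".join(args))
        print(subprocess.run(args, capture_output=True, text=True).stdout)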
Yeah, that might be my next recourse (BIOS updated already).
I bumped the voltage, but this time the stop code occurred after trying to sleep: 0x0000001E.
I guess more bumping is at hand; I think I have it at 1.265 now instead of 1.255.
I already had the power limit at 241 watts though, not the 4096-watt "water cooler" value it tries for. I figure I'm probably not far off at the old 1.255; I'll just go up a tad, and when I don't get a failed power save I'm good. However, I can get 2 or 3 in a row with no issues.. and other stress tests were perfect...
I've opted to go auto on volts + adaptive for a short stint. So far no issues; however, temps are 100°C during MSFS vs 87°C with 1.255 + adaptive, so I guess next I'll just bump up from 1.255 until stable. You would think running stock clocks would be easy and you wouldn't have to play the volts game...
Lately I seem to be getting a lot of issues with the computer when pressing the power button to resume from sleep with my setup.
I don't overclock; I have most turbo settings, if not all, on auto. I do have the CPU volts fixed at 1.255, LLC6, adaptive setting; the DDR is XMP and auto, which comes...
From what I can see, or remember from past experience, auto plus XMP should be enough for this. So far, since disabling XMP (left at 5600, 1.4V DRAM), I think it "may" be more stable, but I haven't stressed it yet. I think perhaps this RAM choice was bad.. I may try these for full XMP:
Corsair...
I can't seem to find any recommended BIOS settings for this configuration. I've been plagued with BSODs with both manual voltage on CPU/RAM and auto.
I was originally thinking that with a non-K chip, auto CPU volts would be fine. I've also tried manual and auto DRAM volts; currently on auto CPU and DRAM at 1.4...
Somewhat older topic, but for those with the Aero..
Have you noticed a background of "dust particles", banded I think horizontally, very faint, on bright white screens? I don't know if this is mura or not.
I'm debating swapping the headset before my 1 year ends soon, but I have a feeling this is...
Well, I decided to do DDU again: went to safe mode, cleared things out, back into Win11, tested, and the screen times out just fine..
Installed the latest NVIDIA driver.. the screen STILL times out just fine.
Rebooted, and now it's broken again. The issue for me popped up on switching from the 3080 Ti to the 4090.. so I think...
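Side note for anyone repeating this test loop: rather than waiting out the normal display timeout after every driver change, the timeout can be dropped to one minute temporarily; a throwaway sketch (the value is minutes on AC power; set it back to your usual number afterwards):

    import subprocess

    # Temporarily set the display timeout on AC power to 1 minute so each
    # DDU / driver-install round can be verified quickly; restore the normal
    # value (e.g. 10) when finished testing.
    subprocess.run(["powercfg", "/change", "monitor-timeout-ac", "1"], check=True)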
Well, in an interesting turn of events, I swapped out for a Honeycomb yoke and moved my CH to another PC.. in BOTH cases the PC now will not sleep, 21H2 Win11 on both machines. Disconnect either yoke from either system and the screen times out just fine.
I'm not using the CH software, though I did try unchecking the power-management option; no change.
I also tried disabling the yoke in Device Manager, and for some reason even that wasn't making things work; I had to physically unplug them.
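For anyone else chasing this, "powercfg /requests" (elevated) lists whatever drivers or processes are actively holding display/system power requests, i.e. vetoing sleep; a small Python sketch that just dumps it:

    import subprocess

    # powercfg /requests (run elevated) shows the DISPLAY / SYSTEM / AWAYMODE
    # power requests currently held by drivers and processes -- whatever is
    # keeping the machine (or the screen) awake shows up here.
    result = subprocess.run(["powercfg", "/requests"], capture_output=True, text=True)
    print(result.stdout or result.stderr)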
Well, I took the approach of disconnecting things inside the yoke.. I disconnected all but the USB connection, so no pots were involved at that point. The issue remained.
I still need to give this a shot. I suppose I could just use a 1:1 mixture of water and isopropyl alcohol in a spray bottle, couldn't I (I don't have contact cleaner on hand)? Or maybe there's a way I could disconnect them in the yoke just to prove it's that, without disconnecting the USB, but I doubt...
I'm not 100% sure this is where the issue lies, but I had a 3080 Ti previously, and with my CH pedals and yoke the screen would time out at the set time, no issues.
I put the new 4090 in, latest NVIDIA driver (and I also tried the previous drivers released since the 4090 came out); I did a clean install, I did DDU in...
Actually, I just stumbled on these:
Apparently a known thing:
https://www.overclock.net/threads/massive-rtx-4090-problems-driver-or-hardware.1801381/
https://www.nvidia.com/en-us/geforce/forums/geforce-graphics-cards/5/502996/rtx-4090-driver-crashing-constantly-without-any-lo/
Fix: In NVIDIA...
Anyone run into this with a 4090? Black screen, then it recovers and logs to Event Viewer. I was just using Chrome, nothing else.
In my case: Windows 11 + a 1000-watt EVGA SuperNOVA Platinum + a 12900K CPU at 4.9 GHz and 6400 RAM.
I never had this occur with the previous 3080 Ti.
I ran DDU and the latest...
So I tried updating to the 2020 firmware since its zip wasn't corrupt (the 2022 Supermicro LSI one was corrupt).. I ran it, then rebooted (did it using the GUI).. on reboot I got hit with a ton of errors.
Maybe these are normal after a firmware update? But then again, the ones where it says missing don't...
Also, in the manager for the controller card in Windows I found a ton of "informational" errors going back as far as 3 weeks ago.. not sure where/why they weren't showing in the card's BIOS interface..
Many about unexpected sense on a PD, corrected medium error during recovery, PD 2, 4, etc.
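Assuming this card is MegaRAID-family and StorCLI is installed, the controller's own event log can also be dumped outside Windows' manager to match those entries against specific PDs; a sketch, with the caveat that the binary name (storcli vs storcli64) and whether the installed build supports "show events" depend on the version:

    import subprocess

    # Dump controller 0's event log so the "unexpected sense" / "corrected
    # medium error" entries can be tied to specific physical drives. Assumes
    # StorCLI is installed and supports "show events" on this build.
    out = subprocess.run(["storcli64", "/c0", "show", "events"],
                         capture_output=True, text=True)
    print(out.stdout or out.stderr)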
So I decided to reboot to the 2022 drive.. strangely, no exclamation mark in Windows Device Manager at all.. all drives showing up.
Makes me really wonder what's going on here; I guess random hardware failures, or that ReFS volume somehow barfing the driver when it was corrupted (doubt it).
Then...
Yeah, I never trust them, but I tried all the options; it's strange that 2017 is the most recent update though. Edit: I did find 6.714.18.0 dated 2018, so either way I'm updating to that, and maybe I'll try that 2022 drive one more time to see if it cures 2022 (otherwise 2022 isn't compatible).
Changing for the...
So, another update. I'm really not sure what the situation was exactly.
I moved the card from one slot to another, then moved the drives in the chassis around, and suddenly I could boot to the original carved-out 2019 OS. But the E drive, which was a ReFS drive, was corrupt and "raw"; the backup data, 43 TB...
I tried to update the driver manually; that would hang sometimes, even when picking the 2017 driver (which is basically the same driver) I had access to.. the only one online is 2017 as well. Using the Windows Update option on the driver says the best driver is already installed, in that situation.
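One thing that sometimes behaves better than Device Manager when the GUI hangs is staging the INF directly with pnputil from an elevated prompt; a rough sketch, where the INF path is just a placeholder for whatever extracted 2017/2018 driver package is being installed:

    import subprocess

    # List currently staged third-party drivers, then stage and install a
    # specific extracted INF. The path below is a placeholder for the
    # extracted LSI/Avago storage driver package; run from an elevated prompt.
    subprocess.run(["pnputil", "/enum-drivers"], check=True)
    subprocess.run(["pnputil", "/add-driver",
                    r"C:\drivers\megaraid\driver.inf", "/install"], check=True)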
Well, originally I carved it out due to the read speed of RAID 6 across 8 drives vs just a single SAS 12 Gbps drive for the OS. In hindsight, I think I'd rather just drop in a PCIe x4 adapter card and go NVMe for the OS going forward.
I didn't have a spare LSI 3108 lying around, though now I'm going to...
Those are some good ideas to try at least.. On the backup: well, technically the data in the partition that won't load is the backed-up data (DPM data) from every server, so losing that sucks (40 TB), but it can be redone over time if I get this working.
But a slight update: I installed Windows...
Slight progress.. using Shift+F10 to get a command prompt, I can in fact see the C and D drives that are carved out of the RAID 6 array.. but when I try to get to the ReFS E: drive it just sits at the prompt, much like it gets stuck when running diskpart. I'm pretty sure the ReFS volume is...
Yeah, I put in a request with my CDW rep and am chatting with a few people at Dell now, though I fear those solutions will be pretty pricey. I'd really need more time to research, but time isn't on our side. I mean, our old server gets by with dual gigabit Ethernet, but there are probably better...
If by log you mean the LSI MegaRAID BIOS area, I'm not seeing any records there for some reason.
The drives are Seagate Exos 8 TB enterprise drives in this case (ST8000NM001A).
So I'm midstream on a hard drive/server issue with our DPM server here, so I've opted to start looking for a system that can ship and be here ASAP.
The old system was a 16-bay-or-more 4U chassis type setup (SAS 12 Gb/s) with dual 1 Gb NIC ports.. it had a Supermicro board with dual E5-2620 v4 2.10 GHz CPUs and...
I was able to clear the BIOS; all OK BIOS-wise now.
I checked the SMART info; no errors on any drive, which is odd.
I would have thought that if it were the two drives preventing the system from booting to a USB boot stick or to its Windows install on that virtual RAID 6 drive, then when I pulled them it...
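On the SMART side, if a second opinion outside the controller BIOS is useful, smartmontools can usually query drives behind an LSI/Avago controller via its megaraid passthrough; a sketch assuming smartctl is installed and the 8 Exos drives sit at passthrough IDs 0-7 — "smartctl --scan" shows what a given build and platform can actually address, and the base device name differs between Linux and Windows:

    import subprocess

    # Show what smartctl can see, then query SMART for each drive behind the
    # controller. The megaraid,N passthrough IDs and the base device name
    # (/dev/sda here) vary by platform and controller; adjust as needed.
    subprocess.run(["smartctl", "--scan"], check=True)
    for dev_id in range(8):  # assuming the 8 Exos drives are IDs 0-7
        subprocess.run(["smartctl", "-a", "-d", f"megaraid,{dev_id}", "/dev/sda"])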