Dual laptop display cards issue


#1

Hello, I've just started playing with this with a view to getting in on the ground floor of development and 3D asset creation here.
The first thing that happened is that nothing happened. I logged in OK and everything appeared to be loading, except that I couldn't even see my avatar. I had a screen full of stars and that was it.
Thankfully the problem was easy to spot in the logs. I have a pretty recent i7 laptop with two video cards in it: the standard Intel HD graphics you'll find in 99% of i7 laptops (an HD 4600, per the log below) and an NVIDIA GT840M, plus, unfortunately, two screens connected.
There was just a whole list of out-of-resource errors for the Intel GPU, so forcing the application to use only the NVIDIA card solved it for me. It's a problem that definitely needs addressing, though: my reaction was 'What a load of crap', and I nearly gave up on this. Someone less computer-savvy would probably say that out loud and never bother again.
I'm a programmer as well as a professional 3D designer, but I haven't got around to looking at the code, and I'm afraid I'm no expert at video card programming on Windows 10 anyway, or I'd be making a code contribution myself. Perhaps someone can pick this up as an end-user usability issue.
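To illustrate what I mean by a usability fix, here's a rough, untested sketch (hypothetical function name, not from the actual codebase) of a startup check that would tell the user what went wrong instead of showing an empty sky. It assumes an OpenGL context is already current:

// Hypothetical startup check: log the active GL vendor/renderer and warn
// when we ended up on an integrated GPU that may hit resource limits.
#include <cstdio>
#include <cstring>
#include <windows.h>   // must be included before GL/gl.h on Windows
#include <GL/gl.h>

void warnIfRunningOnIntegratedGpu() {
    // glGetString requires a current OpenGL context on this thread.
    const char* vendor   = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
    if (vendor && std::strstr(vendor, "Intel") != nullptr) {
        std::fprintf(stderr,
                     "Warning: rendering on the integrated GPU (%s). On dual-GPU "
                     "laptops, try forcing the discrete GPU in the NVIDIA Control "
                     "Panel if you see shader or resource errors.\n",
                     renderer ? renderer : "unknown");
    }
}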


#2

Just wondering why you didn't have the NVIDIA card as the default?


#3

For normal system GUI stuff there's no point, and it would just take memory away from CUDA in Blender when I do an architectural render with an insane number of polys because I've been supplied a subdivided sculpt from ZBrush to use in it. I have my reasons.
My bigger question: without the NVIDIA card, my laptop would have the same configuration as a standard one bought from PC World for a fiver, so why didn't this work on the Intel GPU in the first place? The laptop is just over a year old and has 16GB of RAM and a 512GB SSD.

It’s easily repeatable and I’ve just re-forced the error so I could dump this.

[03/10 02:03:45] [DEBUG] Output Buffer capacity in frames: 3 requested bytes: 6144 actual bytes: 6144 os default: 0 period size: 1228
[03/10 02:03:45] [DEBUG] GL Version: “4.1.0 - Build 20.19.15.4331”
[03/10 02:03:45] [DEBUG] GL Shader Language Version: “4.10 - Build 20.19.15.4331”
[03/10 02:03:45] [DEBUG] GL Vendor: “Intel”
[03/10 02:03:45] [DEBUG] GL Renderer: “Intel® HD Graphics 4600”
[03/10 02:03:45] [DEBUG] GL Version: “4.1.0 - Build 20.19.15.4331”
[03/10 02:03:45] [DEBUG] GL Shader Language Version: “4.10 - Build 20.19.15.4331”
[03/10 02:03:45] [DEBUG] GL Vendor: “Intel”
[03/10 02:03:45] [DEBUG] GL Renderer: “Intel® HD Graphics 4600”
[03/10 02:03:45] [DEBUG] Created Display Window.
[03/10 02:03:45] [DEBUG] GL Version: “4.1.0 - Build 20.19.15.4331”
[03/10 02:03:45] [DEBUG] GL Shader Language Version: “4.10 - Build 20.19.15.4331”
[03/10 02:03:45] [DEBUG] GL Vendor: “Intel”
[03/10 02:03:45] [DEBUG] GL Renderer: “Intel® HD Graphics 4600”
[03/10 02:03:45] [DEBUG] Status: Using GLEW 1.10.0
[03/10 02:03:45] [DEBUG]
[03/10 02:03:45] [DEBUG] V-Sync is ON
[03/10 02:03:45] [DEBUG]
[03/10 02:03:45] [DEBUG] Initialized Display.
[03/10 02:03:45] [DEBUG] GLShader::compileProgram - failed to LINK the gl program object :
[03/10 02:03:45] [DEBUG] The fragment shader uses varying _alpha, but previous shader does not write to it.
[03/10 02:03:45] [DEBUG] Out of resource error.
[03/10 02:03:45] [DEBUG]
[03/10 02:03:45] [DEBUG] GLShader::compileProgram - failed to LINK the gl program object :
[03/10 02:03:45] [DEBUG] The fragment shader uses varying _alpha, but previous shader does not write to it.
[03/10 02:03:45] [DEBUG] Out of resource error.
[03/10 02:03:45] [DEBUG]
[03/10 02:03:45] [DEBUG] GLShader::compileProgram - failed to LINK the gl program object :

etc…
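If I'm reading the repeated block right, the link failure itself is a vertex/fragment interface mismatch: the fragment shader reads a varying called _alpha that the previous stage never writes. A hypothetical minimal pair that would produce an error of this shape (GLSL embedded as C++ string literals; the names are purely illustrative) might look like the following, and I'd guess the Intel driver is simply stricter about it than the NVIDIA one:

// Hypothetical shader pair matching the shape of the error in the log:
// the varying _alpha is declared by the vertex shader but never written,
// yet the fragment shader consumes it. Some drivers link this anyway and
// leave the value undefined; this Intel driver refuses.
const char* vertexSource = R"(
    #version 410
    in vec4 position;
    out float _alpha;              // declared, but never assigned below
    void main() {
        gl_Position = position;    // note: no write to _alpha
    }
)";

const char* fragmentSource = R"(
    #version 410
    in float _alpha;               // read here -> link-time mismatch
    out vec4 fragColor;
    void main() {
        fragColor = vec4(1.0, 1.0, 1.0, _alpha);
    }
)";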


#4

This is actually super helpful, thanks. We should have a fix for this shortly.


#5

@dazhazit You may wish to try out the build on this PR: https://github.com/highfidelity/hifi/pull/7308


#6

It's actually a problem we've attempted to address. According to the NVIDIA documentation on Optimus, one should be able to mark an application as preferring the high-performance GPU by declaring a DWORD symbol named NvOptimusEnablement in the application, setting it to 1, and exporting it.

In fact we do all of this, but for some reason it doesn't seem to work.

However, inspecting the executable in Dependency Walker has given me an idea as to why that might be. I still need to test, but I may be able to resolve this.
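For reference, the documented pattern is roughly the following (a sketch of the approach the Optimus docs describe, not our exact code). The driver looks for the symbol in the export table of the executable module itself, so an export that ends up in a DLL, or one the linker drops because nothing references it, has no effect; that exe-versus-DLL distinction is exactly the kind of thing Dependency Walker makes visible:

// Sketch of the NvOptimusEnablement export per NVIDIA's Optimus docs.
// This must be exported from the .exe itself, not from a DLL.
#include <windows.h>

extern "C" {
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}

int main(int argc, char** argv) {
    // ... create the OpenGL context as usual; with the export visible in
    // the executable, Optimus should route rendering to the discrete GPU.
    return 0;
}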


#7

Thank you for the quick response to this. I may start working on the code myself, though as I said, I'm not paid to do it and I'm no expert in GPU stuff; my last programming position was aimed at writing server code in Node.js, and I never want to go back there, as I ended up muttering words like 'ping latency' and 'pubsub' in my sleep :worried: Hence the recent change of emphasis towards 3D creative work. That said, looking at the speed of asset loading (or the lack thereof), I'm tempted to look at the networking side of this project, and I'm putting on the rubber gloves in preparation for a git clone and some code surgery with an algorithm mallet. Anyway, this is not the place for my CV.