The GPU-Util remains at 100% after running my PyCUDA code
I'm new to CUDA/PyCUDA, so this question might be a little stupid.
I'm basically using PyCUDA to evaluate the (Gaussian) hypergeometric function 2F1(a,b,c,z). The original version of this code I found online was written in C++ [hypf in C++], and I converted it into a PyCUDA ElementwiseKernel. When I code only one function (I need three hypfs in total, with different {a,b,c}), everything works well and the GPU-Util drops back to 0% after the code runs:
import pycuda
import pycuda.driver as drv
import pycuda.autoinit
from pycuda import gpuarray
from pycuda.elementwise import ElementwiseKernel
import numpy as np
from numpy import double

z = double(np.random.rand(50))
z_gpu = gpuarray.to_gpu(z)
h = gpuarray.to_gpu(double(np.zeros(50)))
hypf = ElementwiseKernel("double *x, double *h",
    """
    const double T = 1.0 * pow(10., -10.);
    double a = 0.5;
    double b = 0.3;
    double c = 0.6;
    double term = a * b * x[i] / c;
    double value = 1.0 + term;
    int n = 1;
    while ( abs(term) > T )
    {
        a++, b++, c++, n++;
        term *= a * b * x[i] / c / n;
        value += term;
    }
    h[i] = value;
    """, "hypf")
hypf(z_gpu, h)
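As a sanity check, the same truncated series can be evaluated on the CPU with plain Python/NumPy. The helper name `hyp2f1_series` below is my own (not from the post); it mirrors the term recurrence the kernel runs per element:

```python
import numpy as np

def hyp2f1_series(a, b, c, z, tol=1e-10):
    """Truncated Gauss hypergeometric series 2F1(a, b; c; z),
    using the same term recurrence as the kernel above."""
    term = a * b * z / c
    value = 1.0 + term
    n = 1
    while abs(term) > tol:
        a += 1; b += 1; c += 1; n += 1
        term *= a * b * z / c / n
        value += term
    return value

# Known closed form to check against: 2F1(1, 1; 2; z) = -ln(1 - z) / z
z = 0.5
print(hyp2f1_series(1.0, 1.0, 2.0, z), -np.log(1 - z) / z)
```

For |z| < 1 the term ratio tends to z, so the loop terminates once the terms fall below `tol`.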
Then I get the right results (h[i] holds the function values). But when I code all three hypfs into one ElementwiseKernel:
hypfs = ElementwiseKernel("double *x",
    """
    const double T = 1.0 * pow(10., -10.);
    double *h1;
    double *h2;
    double *h3;
    double a1 = -5./6;
    double b1 = 1./2;
    double c1 = 1./6;
    double a2 = -1./2;
    double b2 = -1./3;
    double c2 = 2./3;
    double a3 = -1./3;
    double b3 = 1./2;
    double c3 = 2./3;
    double term1 = a1 * b1 * x[i] / c1;
    double value1 = 1.0 + term1;
    double term2 = a2 * b2 * x[i] / c2;
    double value2 = 1.0 + term2;
    double term3 = a3 * b3 * x[i] / c3;
    double value3 = 1.0 + term3;
    int n1 = 1;
    int n2 = 1;
    int n3 = 1;
    while ( abs(term1) > T )
    {
        a1++, b1++, c1++, n1++;
        term2 *= a1 * b1 * x[i] / c1 / n1;
        value1 += term1;
    }
    h1[i] = value1;
    while ( abs(term2) > T )
    {
        a2++, b2++, c2++, n2++;
        term2 *= a2 * b2 * x[i] / c2 / n2;
        value2 += term2;
    }
    h2[i] = value2;
    while ( abs(term3) > T )
    {
        a3++, b3++, c3++, n3++;
        term3 *= a3 * b3 * x[i] / c3 / n3;
        value3 += term3;
    }
    h3[i] = value3;
    """, "hypf")
hypfs(z_gpu, h1, h2, h3)
Still, h1, h2, and h3 come out right, and Python (Spyder) tells me that the run has completed.
But the GPU-Util stays at 100% even after I shut down the whole Python process, and I can't find the process when I use the "nvidia-smi" command; it only tells me that GPU-Util = 100%, with no process listed.
Only when I use the "sudo fuser -v /dev/nvidia*" command can I find the process and kill it.
Is the code above wrong, or is something else causing this GPU-Util problem?
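[Editor's note, not from the original post:] Two details in the combined kernel look suspicious. First, `h1`, `h2`, and `h3` are declared as uninitialized device pointers inside the kernel instead of being listed in the argument string (which only names `double *x`), even though the call passes them. Second, the first while loop tests `abs(term1) > T` but its body updates `term2`; since `term1` never changes, the loop cannot terminate whenever the initial `term1` exceeds T, which would leave the kernel spinning and the GPU pinned at 100%. A minimal CPU sketch of that loop shape (hypothetical names, with an iteration cap added so it halts) shows the problem:

```python
def first_loop(x, max_iters=1000):
    """CPU sketch of the first loop in the combined kernel.
    The condition tests term1 but the body updates term2, so term1
    never changes -- without the cap this loop would never exit."""
    T = 1e-10
    a1, b1, c1 = -5.0 / 6, 1.0 / 2, 1.0 / 6
    term1 = a1 * b1 * x / c1
    value1 = 1.0 + term1
    term2 = term1
    n1 = 1
    iters = 0
    while abs(term1) > T and iters < max_iters:
        a1 += 1; b1 += 1; c1 += 1; n1 += 1
        term2 *= a1 * b1 * x / c1 / n1   # bug: term1 is never updated
        value1 += term1
        iters += 1
    return iters

print(first_loop(0.5))  # hits the cap: term1 never falls below T
```

Changing that line to update `term1` (and passing the three output arrays as kernel arguments) would be the natural first thing to try.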
Solution 1:[1]
Do you mean, you want to trigger an event if 2 images are visible to the camera? If I get your point correctly... Then this might help you with the issue:
https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnBecameVisible.html https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnBecameInvisible.html
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Shahil Saha |
