Libtorch: How to make a Tensor with GPU pointer?

Below is pseudo-code for what I want to do. I already know how to move a tensor to the GPU with .cuda(), but I have no idea how to build a new tensor from a pointer that is already on the GPU. Is there a method I've missed?
I don't want to copy devPtr back to the host side; I just want to make a GPU tensor that wraps the pointer.

#include <torch/torch.h>
#include <cuda_runtime.h>

#include <iostream>
#include <vector>

constexpr int64_t HOSTDATA_SIZE = 1 * 10 * 512 * 512;

int main(void) {
  std::vector<float> hostData(HOSTDATA_SIZE, 1.0f);

  float* devPtr;
  cudaMalloc((void**)&devPtr, sizeof(float) * HOSTDATA_SIZE);
  cudaMemcpy(devPtr, hostData.data(), sizeof(float) * HOSTDATA_SIZE, cudaMemcpyHostToDevice);

  torch::Tensor inA = /* make Tensor with devPtr, which is already on the GPU */;
  torch::Tensor inB = torch::randn({1, 10, 512, 512}).cuda();

  torch::Tensor out = torch::matmul(inA, inB);

  std::cout << out << std::endl;
  return 0;
}


Solution 1:[1]

I think this should work; can you confirm?

// Wrap the existing device pointer; the sizes must match the buffer you allocated.
auto options = torch::TensorOptions().dtype(torch::kFloat32).device(torch::kCUDA);
auto gpu_tensor = torch::from_blob(devPtr, {1, 10, 512, 512}, options);

Be careful: torch::from_blob does not take ownership of the pointer. If you need gpu_tensor to be independent of the lifetime of devPtr, you need to clone it.
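
For completeness, here is one way this could slot into the question's main(). It is a minimal sketch, not a confirmed reference: the buffer size, the fill step, and the clone()-then-cudaFree() ownership handling are my assumptions layered on top of the answer above.

#include <torch/torch.h>
#include <cuda_runtime.h>
#include <iostream>

int main(void) {
  // Assumed size: must match the {1, 10, 512, 512} view below.
  constexpr int64_t kNumFloats = 1 * 10 * 512 * 512;

  float* devPtr;
  cudaMalloc((void**)&devPtr, sizeof(float) * kNumFloats);
  // ... fill devPtr here, e.g. cudaMemcpy from host data or a kernel ...

  // Wrap the existing device memory; nothing is copied back to the host.
  auto options = torch::TensorOptions().dtype(torch::kFloat32).device(torch::kCUDA);
  torch::Tensor inA = torch::from_blob(devPtr, {1, 10, 512, 512}, options);

  // from_blob does not own devPtr, so clone() into libtorch-owned memory
  // before freeing the raw buffer (skip the clone if devPtr outlives inA).
  torch::Tensor owned = inA.clone();
  cudaFree(devPtr);

  torch::Tensor inB = torch::randn({1, 10, 512, 512}, options);
  torch::Tensor out = torch::matmul(owned, inB);

  std::cout << out.sizes() << std::endl;
  return 0;
}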


Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1