Install Issue: "no suitable conversion function from "const at::DeprecatedTypeProperties" to "c10::ScalarType" exists" #40

@DavidRNickel

Description

I am currently trying to run pip install for this repo, but the build fails with a total of six errors that all read as follows:

difflogic/cuda/difflogic_kernel.cu(283): error: no suitable conversion function from "const at::DeprecatedTypeProperties" to "c10::ScalarType" exists
      [&] { const auto& the_type = x.type(); constexpr const char* at_dispatch_name = "logic_layer_cuda_forward"; at::ScalarType _st = ::detail::scalar_type(the_type); ; switch (_st) { case at::ScalarType::Double: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Double)) { if (!(false)) { ::c10::detail::torchCheckFail( __func__, "difflogic/cuda/difflogic_kernel.cu", static_cast<uint32_t>(283), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false.  " "(Could this error message be improved?  If so, " "please report an enhancement request to PyTorch.)", "dtype '", toString(at::ScalarType::Double), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeT<at::ScalarType::Double>; return ([&] { logic_layer_cuda_forward_kernel<scalar_t><<<blocks_per_grid, threads_per_block>>>( x.packed_accessor64<scalar_t, 2, torch::RestrictPtrTraits>(), a.packed_accessor64<int64_t, 1, torch::RestrictPtrTraits>(), b.packed_accessor64<int64_t, 1, torch::RestrictPtrTraits>(), w.packed_accessor64<scalar_t, 2, torch::RestrictPtrTraits>(), y.packed_accessor64<scalar_t, 2, torch::RestrictPtrTraits>() ); })(); } case at::ScalarType::Float: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Float)) { if (!(false)) { ::c10::detail::torchCheckFail( __func__, "difflogic/cuda/difflogic_kernel.cu", static_cast<uint32_t>(283), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false.  " "(Could this error message be improved?  
If so, " "please report an enhancement request to PyTorch.)", "dtype '", toString(at::ScalarType::Float), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeT<at::ScalarType::Float>; return ([&] { logic_layer_cuda_forward_kernel<scalar_t><<<blocks_per_grid, threads_per_block>>>( x.packed_accessor64<scalar_t, 2, torch::RestrictPtrTraits>(), a.packed_accessor64<int64_t, 1, torch::RestrictPtrTraits>(), b.packed_accessor64<int64_t, 1, torch::RestrictPtrTraits>(), w.packed_accessor64<scalar_t, 2, torch::RestrictPtrTraits>(), y.packed_accessor64<scalar_t, 2, torch::RestrictPtrTraits>() ); })(); } case at::ScalarType::Half: { do { if constexpr (!at::should_include_kernel_dtype( at_dispatch_name, at::ScalarType::Half)) { if (!(false)) { ::c10::detail::torchCheckFail( __func__, "difflogic/cuda/difflogic_kernel.cu", static_cast<uint32_t>(283), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false.  " "(Could this error message be improved?  If so, " "please report an enhancement request to PyTorch.)", "dtype '", toString(at::ScalarType::Half), "' not selected for kernel tag ", at_dispatch_name))); }; } } while (0); using scalar_t [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeT<at::ScalarType::Half>; return ([&] { logic_layer_cuda_forward_kernel<scalar_t><<<blocks_per_grid, threads_per_block>>>( x.packed_accessor64<scalar_t, 2, torch::RestrictPtrTraits>(), a.packed_accessor64<int64_t, 1, torch::RestrictPtrTraits>(), b.packed_accessor64<int64_t, 1, torch::RestrictPtrTraits>(), w.packed_accessor64<scalar_t, 2, torch::RestrictPtrTraits>(), y.packed_accessor64<scalar_t, 2, torch::RestrictPtrTraits>() ); })(); } default: if (!(false)) { ::c10::detail::torchCheckFail( __func__, "difflogic/cuda/difflogic_kernel.cu", static_cast<uint32_t>(283), (::c10::detail::torchCheckMsgImpl( "Expected " "false" " to be true, but got false.  
" "(Could this error message be improved?  If so, " "please report an enhancement request to PyTorch.)", '"', at_dispatch_name, "\" not implemented for '", toString(_st), "'"))); }; } }()

What might a potential solution to this be? I am unfamiliar with C++/CUDA code, so my ability to read this compiler output is quite limited. Is this a version-compatibility issue, or something upstream in the repository?
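For reference, a likely (unconfirmed) cause: newer PyTorch versions no longer accept the result of Tensor::type() (a const at::DeprecatedTypeProperties&) where the AT_DISPATCH_* macros expect a c10::ScalarType. If the kernel source invokes the macro with x.type(), switching to x.scalar_type() is the usual fix. A sketch of the change, where the macro name is an assumption inferred from the Double/Float/Half cases visible in the expansion above:

```cpp
// Hypothetical reconstruction of difflogic/cuda/difflogic_kernel.cu around
// line 283; the actual accessor arguments are elided with a comment.

// Before: x.type() yields at::DeprecatedTypeProperties, which no longer
// converts to c10::ScalarType on recent PyTorch -- hence the error above.
AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.type(), "logic_layer_cuda_forward", [&] {
    logic_layer_cuda_forward_kernel<scalar_t><<<blocks_per_grid, threads_per_block>>>(
        /* ... packed accessors as shown in the error output ... */);
});

// After: pass the c10::ScalarType directly.
AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "logic_layer_cuda_forward", [&] {
    logic_layer_cuda_forward_kernel<scalar_t><<<blocks_per_grid, threads_per_block>>>(
        /* ... packed accessors as shown in the error output ... */);
});
```

If the same x.type() pattern appears at each of the six error sites, the same one-line change would presumably apply to each.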

I have encountered these same errors on two different machines (one Ubuntu WSL2, the other Rocky Linux).

Machine specs:

  • WSL: CUDA 12.9, Python 3.13.3, Torch 2.7.0+cu128
  • Rocky: CUDA 12.7, Python 3.13.3, Torch 2.6.0+cu126

On the Rocky machine, I have to run python setup.py build because pip fails to find torch when it executes line 2 of setup.py. The output is roughly the same as on the WSL machine running pip install difflogic or pip install -e . There are also some warnings thrown.
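As an aside on the pip symptom: "torch not found while running setup.py" is commonly caused by pip's build isolation, since the isolated build environment does not see the torch already installed in your environment. A possible workaround (assuming torch is installed and importable first):

```shell
# Disable pip's isolated build environment so that setup.py can
# import the torch package already installed in this environment.
pip install --no-build-isolation -e .
```

This would not fix the conversion errors above, but it may let both machines fail (or succeed) in the same way, which makes comparison easier.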

Thanks for any help you can offer.
