ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

Importing pyxdameraulevenshtein gives the following error. I have

pyxdameraulevenshtein==1.5.3,
pandas==1.1.4, and
scikit-learn==0.20.2.
numpy is 1.16.1.
It works fine on Python 3.6 but fails on Python 3.7.

Has anyone faced similar issues with Python 3.7 (3.7.9)? Docker image: python:3.7-buster

__init__.pxd:242: in init pyxdameraulevenshtein
    ???
E   ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject


Solution 1:[1]

I'm on Python 3.8.5. It sounds too simple to be real, but I had this same issue and all I did was reinstall numpy. Gone.

pip install --upgrade numpy

or

pip uninstall numpy
pip install numpy

Solution 2:[2]

Try numpy==1.20.0. This worked here, even though the circumstances were different (Python 3.8 on Alpine 3.12).

Solution 3:[3]

Indeed, (building and) installing with numpy>=1.20.0 should work, as pointed out e.g. by this answer below. However, I thought some background might be interesting, and it also suggests alternative solutions.

There was a change in the C API in numpy 1.20.0. In some cases, pip downloads the latest version of numpy for the build stage, but the program is then run with the version of numpy that is actually installed. If the build used numpy >=1.20 but the installed version is <1.20, this leads to the error: the "Expected 88" comes from the 1.20 C header, the "got 80" from the older runtime.

(The other way around should not matter, because of backwards compatibility: an extension built against numpy <1.20 still runs with >=1.20. The older version simply could not anticipate the upcoming change.)
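You can inspect the runtime side of this comparison yourself (a quick check, not part of the original answer; the exact value depends on your numpy version and platform):

```python
import numpy as np

# tp_basicsize of the ndarray type is the "got ... from PyObject" number
# that a Cython-built extension compares against its build-time C header.
print(np.__version__, np.ndarray.__basicsize__)
```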

This leads to several possible ways to solve the problem:

  • upgrade (the build version) to numpy>=1.20.0
  • use the minimum supported numpy version in pyproject.toml (oldest-supported-numpy)
  • install with --no-binary
  • install with --no-build-isolation

For a more detailed discussion of potential solutions, see https://github.com/scikit-learn-contrib/hdbscan/issues/457#issuecomment-773671043.
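To illustrate the oldest-supported-numpy route: a package that builds against numpy's C API can declare it in its build requirements, so the build stage always uses the oldest ABI-compatible numpy. A minimal sketch of the relevant pyproject.toml section (not taken from the linked discussion):

```toml
[build-system]
requires = ["setuptools", "wheel", "Cython", "oldest-supported-numpy"]
build-backend = "setuptools.build_meta"
```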

Solution 4:[4]

Solution without upgrading numpy

While upgrading the numpy version would often solve the issue, it's not always viable. A good example is when you're using tensorflow==2.6.0, which isn't compatible with the newest numpy version (it requires numpy~=1.19.2).

As already mentioned in FZeiser's answer, there was a change in numpy's C API in version 1.20.0. There are packages that rely on this C API when they are being built, e.g. pycocotools. Given that pip's dependency resolver doesn't guarantee any order for installing packages, the following might happen:

  1. pip figures out that it needs to install numpy, and it chooses the latest version, 1.21.2 as of the time of writing.
  2. It then builds a package that depends on numpy and its C API, e.g. pycocotools. This package is now compatible with the numpy 1.21.2 C API.
  3. At a later point, pip needs to install a package that requires an older version of numpy, e.g. tensorflow==2.6.0, which tries to install numpy==1.19.5. As a result, numpy==1.21.2 is uninstalled and the older version is installed.
  4. When running code that uses pycocotools, its current installation relies on the updated numpy C API, yet numpy was downgraded, resulting in the error.
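To spot this situation ahead of time, you can list which installed packages pin numpy and compare their specifiers. A stdlib-only sketch (the helper name is mine, not from the answer):

```python
import re
from importlib.metadata import distributions

def numpy_requirements():
    """Map each installed distribution to the numpy specifier it declares."""
    found = {}
    for dist in distributions():
        for req in dist.requires or []:
            spec = req.split(";")[0].strip()  # drop environment markers
            # Extract the bare project name so "numpydoc" etc. don't match.
            name = re.split(r"[\s<>=!~\[(]", spec, maxsplit=1)[0]
            if name == "numpy":
                found[dist.metadata["Name"]] = spec
    return found

# A package built against a newer numpy than another package's pin
# (e.g. tensorflow's ~=1.19.2) is a candidate for this error.
print(numpy_requirements())
```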

Solution

Rebuild the package that was built against the mismatched numpy C API, e.g. for pycocotools:

pip uninstall pycocotools
pip install pycocotools --no-binary pycocotools

Solution 5:[5]

I had this issue when using the TensorFlow Object Detection API. TensorFlow is currently NOT compatible with numpy==1.20 (although the issue does not become apparent until later). In my case, the issue was caused by pycocotools. I fixed it by installing an older version:

pip install pycocotools==2.0.0

Solution 6:[6]

Upgrading numpy to 1.21.1 worked for me!

Solution 7:[7]

What worked for me was:

pip uninstall numpy
conda install -y -c conda-forge numpy

As bizarre as it might sound, I didn't even have to uninstall numpy with conda first, which seemed odd to me. I am using Python 3.9.

Solution 8:[8]

For anyone using Poetry, it is necessary to have experimental.new-installer set to true for an application with a numpy<1.20 dependency to be built correctly, i.e.:

poetry config experimental.new-installer true

It is true by default, but if it has been changed (as was the case for me) it can catch you out.

My application uses TensorFlow, and I therefore did not have the option of upgrading numpy to >=1.20. Poetry also does not support --no-binary dependencies.

Solution 9:[9]

Upgrading to numpy 1.22 solved it for me.

Solution 10:[10]

After you pip install any package, make sure you restart the kernel; then it should work. Usually packages get upgraded automatically, and all you need is a quick restart. At least, that is what worked in my situation: I was getting the same error when I tried to install and use pomegranate.

Solution 11:[11]

I was facing the same issue on a Raspberry Pi 3. The error was actually with pandas: although tensorflow needs numpy~=1.19.2, pandas is not compatible with it. So I upgraded numpy to the latest version (since downgrading was not an option), and everything works fine!

root@raspberrypi:/home/pi# python3
Python 3.7.3 (default, Jan 22 2021, 20:04:44) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np 
>>> np.__version__
'1.21.5'
>>> import pandas as pd
>>> pd.__version__
'1.3.5'
>>> import tensorflow as tf
>>> tf.__version__
'2.4.0'
>>> tf.keras.__version__
'2.4.0'
>>> tf.keras.layers
<module 'tensorflow.keras.layers' from '/usr/local/lib/python3.7/dist-packages/tensorflow/keras/layers/__init__.py'>

Same issue here also - https://github.com/bitsy-ai/tensorflow-arm-bin/issues/5

Tensorflow source: https://github.com/bitsy-ai/tensorflow-arm-bin

Solution 12:[12]

Upgrade numpy version:

pip install -U numpy

Solution 13:[13]

Use a Python virtual environment and install gensim using:

pip install gensim==3.8.3

Solution 14:[14]

This worked for me (when nothing else on this page did):

# Create environment with conda or venv.
# Do *not* install any other packages here.
pip install numpy==1.21.5
# Install all other packages here.
# This works as a package may build against the currently installed version of numpy.

This solved a particularly brutal issue that, as of 2022-04-11, was unresolvable by all the other answers on this page: they try to fix the problem after it occurs, whereas this fixes it before it occurs.

In addition, experiment with different versions of Python, e.g. 3.8, 3.9, 3.10.

Reference: Excellent answer by @FZeiser that explains why this works.

Solution 15:[15]

I encountered the same problem with Python 3.10.4 and numpy 1.21.5. I solved it only after updating numpy to 1.22.3 via pip uninstall numpy followed by pip install numpy; pip install --upgrade numpy alone didn't work.

PS D:\quant\vnpy-master\examples\veighna_trader> python .\run.py
Traceback (most recent call last):
  File "D:\quant\vnpy-master\examples\veighna_trader\run.py", line 31, in <module>
    from vnpy_optionmaster import OptionMasterApp
  File "D:\it_soft\python3.10.4\Lib\site-packages\vnpy_optionmaster\__init__.py", line 26, in <module>
    from .engine import OptionEngine, APP_NAME
  File "D:\it_soft\python3.10.4\Lib\site-packages\vnpy_optionmaster\engine.py", line 34, in <module>
    from .pricing import binomial_tree_cython as binomial_tree
  File "binomial_tree_cython.pyx", line 1, in init binomial_tree_cython
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject

Solution 16:[16]

For almost the same image: python:3.7-slim-buster

I started having this problem just today; it was nonexistent before.

I solved it by removing numpy from the requirements.txt file and instead doing the following in my Dockerfile:

RUN pip3 install --upgrade  --no-binary numpy==1.18.1 numpy==1.18.1 \
&& pip3 install -r requirements.txt 

I use some old versions of Keras and its libraries, and upgrading to numpy 1.20.0 didn't work for them. But I think the solution lies in the first command, which tells pip not to install a pre-built numpy wheel but to build numpy==1.18.1 from source.

The trick in the command: people may tell you to use pip's --no-binary option to solve the problem, but they don't specify how, and it can be tricky (as it was for me). You have to write the package twice in the command for it to work; the first occurrence is the argument to --no-binary, the second is the actual requirement. Otherwise pip will throw an error.

I think the --upgrade option in the first command isn't necessary.

Solution 17:[17]

Install an older version of gensim; it works!

pip install gensim==3.5.0

or

conda install gensim==3.5.0