How to calculate the semantic similarity with wup_similarity between each pair of elements of a list?

I have a list of strings. I want to create an empty matrix, then calculate the semantic similarity between each pair of strings using wup_similarity, and fill the matrix with the similarity scores.

list=['empty', 'new', 'recent', 'warm', 'full', 'mixed', 'late',
      'little', 'tentative', 'half', 'entree', 'tagliatelle',
      'bolognese', 'asparagus', 'good', 'secondo', 'special', 'garlic',
      'romanesco', 'bread', 'capable', 'comped',
      'tasty', 'huge', 'hungry', 'crowd', 'former', 'able', 'Easy']
print(list)
import numpy as np
from nltk.corpus import wordnet
x = np.zeros((len(list), len(list)))
 
# print the matrix
print('The matrix is : \n', x)

for i in range(len(list)-1):  
    syn_sets = [wordnet.synsets(i) for i in list]
    #syn1 = wordnet.synsets(list[i])
    for j in range(len(list)-1):
        syn_sets1 = [wordnet.synsets(j) for j in list]
        # syn2 = wordnet.synsets(list[j])
        x[i,j]= syn_sets.wup_similarity(syn_sets1)
        
print(x[i,j])

Error:

AttributeError: 'list' object has no attribute 'wup_similarity'


Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
