A function that counts the fullness of a finished hash table (C)

There is an implementation of a hash table and its main functions. You need to write a function `void hashtab_fullness(struct listnode **hashtab)` that accepts only the table itself and first counts the number of elements stored in the table.

struct listnode {
    char key[32];
    int value;
    struct listnode *next;
};
#include <stdlib.h>   /* malloc */
#include <string.h>   /* strcpy */
#include "Hash.h"
#define HASHTAB_SIZE 200003

unsigned int hashtab_hash(char *key)
{
    unsigned int h = 0, hash_mul = 31;

    while (*key)
    {
        h = h * hash_mul + (unsigned int) *key++;
    }

    return h % HASHTAB_SIZE;

}

void hashtab_init(struct listnode **hashtab)
{
    for (int i = 0; i < HASHTAB_SIZE; i++)
        hashtab[i] = NULL;
}

void hashtab_add(struct listnode **hashtab, char *key, int value, int hashtype, int *collisions)
{
    struct listnode *node;
    int index;

    /* hashtab_Hash is a second hash variant, presumably declared in Hash.h */
    if (hashtype == 1) {
        index = hashtab_hash(key);
    } else {
        index = hashtab_Hash(key);
    }

    node = malloc(sizeof(*node));
    if (node != NULL)
    {
        if (hashtab[index] != NULL) {
            *collisions += 1;
        }
        strcpy(node->key, key);
        node->value = value;   /* was missing: store the value, not just the key */
        node->next = hashtab[index];
        hashtab[index] = node;
    }
}


Solution 1:[1]

You have a hash table implemented with separate chaining (if an entry hashes to an index that is already occupied, the new node is pushed onto the list at that index). As such, the hash table can never really be full. The more you add to the table, the more it degrades into a linked list.

To make this obvious, suppose HASHTAB_SIZE = 1. Then every key hashes to the same index, and you end up with a plain linked list for all intents and purposes. The size of that list is limited only by the amount of memory in your system.

So, in order to calculate the "fullness", a somewhat arbitrary definition is needed, e.g. the ratio (slots with a non-empty list) / HASHTAB_SIZE. Such a definition also determines what kind of surprises can happen: if you insert many elements that all get the same hash key, the table appears nearly empty while it consumes a lot of memory (and holds a large number of elements).
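A minimal sketch of that definition, with hypothetical helper names and a tiny HASHTAB_SIZE so the demo is self-contained (the question uses 200003):

```c
#include <stdio.h>

#define HASHTAB_SIZE 7   /* tiny table for the demo only */

struct listnode {
    char key[32];
    int value;
    struct listnode *next;
};

/* Total number of nodes across all chains. */
static size_t hashtab_count(struct listnode **hashtab)
{
    size_t n = 0;
    for (size_t i = 0; i < HASHTAB_SIZE; i++)
        for (struct listnode *p = hashtab[i]; p != NULL; p = p->next)
            n++;
    return n;
}

/* Number of slots whose chain is non-empty. */
static size_t hashtab_used_slots(struct listnode **hashtab)
{
    size_t used = 0;
    for (size_t i = 0; i < HASHTAB_SIZE; i++)
        if (hashtab[i] != NULL)
            used++;
    return used;
}

/* "Fullness" as defined above: used slots divided by HASHTAB_SIZE. */
void hashtab_fullness(struct listnode **hashtab)
{
    printf("elements: %zu, fullness: %.1f%%\n",
           hashtab_count(hashtab),
           100.0 * hashtab_used_slots(hashtab) / HASHTAB_SIZE);
}
```

With three elements where two of them share one slot, the element count is 3 but only 2 of the 7 slots are used, which illustrates the surprise described above: the count of stored elements and the fullness ratio can diverge arbitrarily.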

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: BitTickler