How to justify the choice of hyperparameters in embedding and clustering algorithms?

I am currently working on my thesis, which focuses on graph embedding via node2vec and on applying different clustering methods to the resulting embeddings. Specifically, I am trying to compare methods like HDBSCAN/DBSCAN, spectral clustering, and k-means.
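
For context, my pipeline looks roughly like the following (a minimal sketch using the node2vec PyPI package and scikit-learn; the karate-club graph and all parameter values are placeholders, not the actual settings from my thesis):

```python
import networkx as nx
from node2vec import Node2Vec
from sklearn.cluster import DBSCAN, KMeans, SpectralClustering

# Placeholder graph -- my real data is a larger real-world graph
G = nx.karate_club_graph()

# Embed nodes with node2vec; dimensions, walk length, p and q are chosen ad hoc for now
n2v = Node2Vec(G, dimensions=64, walk_length=30, num_walks=200, p=1, q=1, workers=2)
model = n2v.fit(window=10, min_count=1)

# Embedding matrix, one row per node (node2vec stores node IDs as strings)
X = model.wv[[str(n) for n in G.nodes()]]

# Cluster the embeddings with the methods I want to compare
labels_kmeans   = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
labels_dbscan   = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
labels_spectral = SpectralClustering(n_clusters=4, random_state=0).fit_predict(X)
```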

I am aware of evaluation methods like the elbow method, the silhouette score, and the gap statistic. But apart from those, and given that they often point to different conclusions: is there any scientifically sound way to justify the choice of hyperparameters in the embedding and clustering methods? Aside from the visualization looking pleasing, are you aware of anything?
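
To make the problem concrete, this is roughly how I score candidate settings at the moment (a minimal sketch, not my actual thesis code; silhouette_over_k and silhouette_over_eps are illustrative helpers I wrote, and the X they expect is an embedding matrix like the one in the sketch above):

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans
from sklearn.metrics import silhouette_score

def silhouette_over_k(X, k_values):
    """Silhouette score of k-means for each candidate number of clusters."""
    scores = {}
    for k in k_values:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = silhouette_score(X, labels)
    return scores

def silhouette_over_eps(X, eps_values, min_samples=5):
    """Silhouette score of DBSCAN for each candidate eps (noise points excluded)."""
    scores = {}
    for eps in eps_values:
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
        mask = labels != -1  # ignore noise points
        n_clusters = len(set(labels[mask]))
        if n_clusters > 1 and mask.sum() > n_clusters:
            scores[eps] = silhouette_score(X[mask], labels[mask])
        else:
            scores[eps] = np.nan  # degenerate clustering: nothing meaningful to score
    return scores

# e.g. silhouette_over_k(X, range(2, 11)) and silhouette_over_eps(X, np.linspace(0.1, 1.0, 10))
# typically peak at different settings, which is exactly the ambiguity I can't resolve
```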

I couldn't find any systematic tuning in any of the papers or blog posts I've looked at, only vague hints like "rather higher" or "try > 1".

Any help is greatly appreciated :)


