Simplifying a word network in order to calculate betweenness centrality
I'm working on a text mining and network analysis project, and I can't calculate betweenness_centrality on my network because it has more than 15,000 nodes and 275,000 edges.
I'd like to ask whether there is a way to simplify the network, given that mine is a network of words. I thought about grouping synonyms into a single node, maybe through WordNet (https://wordnet.princeton.edu/), but that library returns several items (synonyms) for each single word, which would actually complicate the network.
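To make the grouping idea concrete, here is roughly what I was considering (a minimal sketch assuming NLTK's WordNet interface; the tiny graph and the `first_synset_name` helper are only illustrative, not my real pipeline):

```python
import networkx as nx
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet') once

# Toy graph standing in for my real word network (hypothetical words/edges).
G = nx.Graph()
G.add_edges_from([("car", "road"), ("automobile", "street"), ("road", "street")])

def first_synset_name(word):
    """Illustrative helper: map a word to its first (most frequent) WordNet
    synset name, or keep the word itself if WordNet has no entry for it."""
    synsets = wn.synsets(word)
    return synsets[0].name() if synsets else word

# Words that share the same first synset collapse into a single node;
# nx.relabel_nodes merges nodes that are mapped to the same label.
mapping = {word: first_synset_name(word) for word in G.nodes()}
G_simplified = nx.relabel_nodes(G, mapping, copy=True)
print(G_simplified.nodes())  # 'car' and 'automobile' should merge into 'car.n.01'
```

My worry is exactly the opposite case: most words have several synsets, so picking only the first one may merge words that are not really synonyms in my corpus.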
Do you have any advice on how to simplify the network, or how to get a betweenness_centrality measure with a good approximation in a reasonable time?
I'm working with Python and the networkx library.
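For the approximation side, I saw that networkx's `betweenness_centrality` accepts a `k` argument that estimates the measure from a random sample of pivot nodes. Would something along these lines be a sound way to trade accuracy for speed? (A rough sketch on a small random stand-in graph, with an arbitrary `k`; the call would be the same on my real network.)

```python
import networkx as nx

# Small random stand-in graph so the sketch runs quickly.
G = nx.gnm_random_graph(3000, 30000, seed=42)

# Estimate betweenness centrality from a sample of k pivot nodes instead of
# running Brandes' algorithm from every node; a larger k gives a better estimate.
bc_approx = nx.betweenness_centrality(G, k=200, seed=42)

# Inspect the ten highest-ranked nodes under the approximation.
top10 = sorted(bc_approx.items(), key=lambda kv: kv[1], reverse=True)[:10]
print(top10)
```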
Thank you guys
Sources
Source: Stack Overflow, licensed under CC BY-SA 3.0.
