Error in nchar(Terms(x), type = "chars") : invalid multibyte string, element 204, when inspecting a document-term matrix

Here is the source code that I have used:

    library(tm)
    library(Rwordseg)    # segmentCN() for Chinese word segmentation

    MyData <- Corpus(DirSource("F:/Data/CSV/Data"), readerControl = list(reader = readPlain, language = "cn"))
    SegmentedData <- lapply(MyData, function(x) unlist(segmentCN(x)))
    temp <- Corpus(DataframeSource(SegmentedData), readerControl = list(reader = readPlain, language = "cn"))

Preprocessing Data

    temp <- tm_map(temp, removePunctuation)
    temp <- tm_map(temp, removeNumbers)
    removeURL <- function(x) gsub("http[[:alnum:]]*", " ", x)
    temp <- tm_map(temp, removeURL)
    temp <- tm_map(temp, stripWhitespace)
    dtmxi <- DocumentTermMatrix(temp)
    dtmxi <- removeSparseTerms(dtmxi, 0.83)

    inspect(t(dtmxi))

This is where I get the error.


Solution 1:[1]

I believe there are some Chinese characters in your file. To overcome this issue, use this line of code so that they can be read as well:

    Sys.setlocale('LC_ALL', 'C')
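
As a minimal sketch of where that call could sit in the question's pipeline (placing it just before DocumentTermMatrix() is my assumption, not part of the original answer):

    # Assumption: switch the locale before building the matrix so that
    # nchar() no longer fails on the multibyte (Chinese) terms.
    Sys.setlocale('LC_ALL', 'C')
    dtmxi <- DocumentTermMatrix(temp)
    dtmxi <- removeSparseTerms(dtmxi, 0.83)
    inspect(t(dtmxi))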

Solution 2:[2]

In my case, RStudio restarts the session after I set Sys.setlocale('LC_ALL', 'C') and then run TermDocumentMatrix(mycorpus).
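
If changing the locale for the whole session is a concern, one variant (my addition, not part of this answer) is to change only the character-handling category and restore it once the matrix has been built:

    # Assumption: restricting the change to LC_CTYPE and restoring it
    # afterwards limits locale side effects on the rest of the session.
    old_ctype <- Sys.getlocale('LC_CTYPE')
    Sys.setlocale('LC_CTYPE', 'C')
    tdm <- TermDocumentMatrix(temp)    # 'mycorpus' in the answer; 'temp' in the question
    Sys.setlocale('LC_CTYPE', old_ctype)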

Solution 3:[3]

You can use this code, where txt is your text corpus:

    txt <- tm_map(txt, content_transformer(stemDocument))
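
Fitted to the question's objects, that might look like the sketch below (applying it before DocumentTermMatrix() and the SnowballC dependency are assumptions on my part):

    library(tm)
    library(SnowballC)    # stemDocument() uses the SnowballC word stemmer

    # content_transformer() wraps a function that operates on plain character
    # vectors so that tm_map() keeps the corpus document structure intact.
    temp <- tm_map(temp, content_transformer(stemDocument))
    dtmxi <- DocumentTermMatrix(temp)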

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: Habib Karbasian
Solution 2: Rafael Lima
Solution 3: Unix Soleimani