Is there a standard Firebase rule for preventing an authorized user from posting malicious code, abusive words or spam links?
I've looked through the official documentation and the helpful (but slightly dated) "Security Rules! 🔑 | Get to know Cloud Firestore #6" video, but rather than reinvent the wheel, I'd like to know whether there is a standard Firebase rule for preventing an authorized user from posting malicious code, abusive words, or spam links.
For example, trying to prevent those bad things in 'sampleinputfield' below:
```
allow create: if
  request.auth != null &&
  request.resource.data.userId == request.auth.uid &&
  request.resource.data.sampleinputfield is string &&
  request.resource.data.sampleinputfield.size() < 80 &&
  request.resource.data.sampleinputfield.<<something here to block spam, malicious code, etc>>;
```
I'm aware that Cloud Functions can clean up abusive language (see "How do Cloud Functions work? | Get to know Cloud Firestore #11"), but I'd like to know whether something similar exists in security rules as well.
Thanks for any help.
Solution 1:[1]
There isn't a built-in filter for cases like this. You'll need to write a custom function that uses `matches(<REGEX>)` to test for specific words.
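For example, a hand-rolled helper along these lines could be called from the `allow create` condition. This is a minimal sketch, assuming a hypothetical blocklist and a crude link pattern (`matches()` in the rules language uses RE2 syntax):

```
// Hypothetical helper: rejects blocklisted words and anything that
// looks like a link. Adjust the patterns to your own needs.
function isCleanText(text) {
  return !text.lower().matches('.*(badword1|badword2).*')
      && !text.lower().matches('.*(https?://|www\\.).*');
}

allow create: if
  request.auth != null &&
  request.resource.data.userId == request.auth.uid &&
  request.resource.data.sampleinputfield is string &&
  request.resource.data.sampleinputfield.size() < 80 &&
  isCleanText(request.resource.data.sampleinputfield);
```

Keep in mind that a rules-level regex is easy to bypass (obfuscated spellings, lookalike characters), so this is a speed bump rather than a real content filter.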
Using Cloud Functions, as shown in the linked video, also works and may be easier, since you can validate input with any Node package instead of hand-writing regular expressions. You can also write the data through a Callable Cloud Function instead of relying on Cloud Firestore triggers, which lets you return an error to the client immediately if validation fails.
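As a sketch of that server-side approach, the validation step might look like the helper below. The blocklist, length limit, and patterns are illustrative assumptions, not a real filtering library; in a Callable Function you would run it inside `functions.https.onCall(...)` and throw an `HttpsError` when it fails:

```javascript
// Illustrative input check for a Callable Cloud Function (hypothetical
// blocklist and limits). In the real function you would call this inside
// functions.https.onCall(...) and throw
// new functions.https.HttpsError('invalid-argument', ...) when it returns false.

const BLOCKED_WORDS = ['badword1', 'badword2']; // hypothetical blocklist

function isCleanInput(text) {
  // Mirror the rules: must be a non-empty string under 80 characters.
  if (typeof text !== 'string' || text.length === 0 || text.length >= 80) {
    return false;
  }
  // Crude spam heuristic: reject anything that looks like a link.
  if (/https?:\/\/|www\./i.test(text)) {
    return false;
  }
  // Crude guard against injected markup.
  if (/<[^>]+>/.test(text)) {
    return false;
  }
  // Case-insensitive blocklist check.
  const lower = text.toLowerCase();
  return !BLOCKED_WORDS.some((word) => lower.includes(word));
}
```

Because the check runs in Node, you could swap the regexes for a dedicated package (e.g. a profanity filter) without touching your security rules.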
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Dharmaraj |
