Ability to ignore some URLs using meta tags or HTTP header

If you provided a way to tell the website scanner to ignore certain pages using an HTML meta tag or an HTTP header, it would make it much easier to exclude pages when there are thousands of them.

My website has over 10,000 blog posts, and most of them are not suitable for training. If I could tell the scanner to ignore these pages through code or server configuration, it would save a lot of time.
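As a rough illustration of the server-side approach, here is a minimal sketch of per-URL header logic. The header name `X-Scanner-Tag`, the value `ignore`, and the path prefixes are all placeholders, not real directives of any scanner:

```python
# Sketch: decide per-URL whether to emit a hypothetical opt-out header.
# "X-Scanner-Tag: ignore" is a placeholder naming, not an existing directive.

NON_TRAINABLE_PREFIXES = ("/blog/", "/archive/")

def response_headers(path: str) -> dict:
    """Return the response headers to send for a given URL path."""
    headers = {"Content-Type": "text/html"}
    if path.startswith(NON_TRAINABLE_PREFIXES):
        # Mark this page as off-limits for the scanner.
        headers["X-Scanner-Tag"] = "ignore"
    return headers
```

Logic like this could run in any web framework or reverse proxy, so thousands of posts get the header without editing each page.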

It could work the same way as Google's robots meta tag and `X-Robots-Tag` header, but with your own naming.
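For comparison, the existing robots conventions and a hypothetical scanner-specific equivalent could look like this (the `your-scanner` name is a placeholder, not a real directive):

```html
<!-- Existing Google conventions -->
<meta name="robots" content="noindex">
<!-- X-Robots-Tag: noindex   (sent as an HTTP response header) -->

<!-- Hypothetical scanner-specific equivalents -->
<meta name="your-scanner" content="ignore">
<!-- X-Your-Scanner-Tag: ignore   (sent as an HTTP response header) -->
```

The meta tag covers pages you can edit individually; the header variant also works for non-HTML resources and can be applied in bulk at the server level.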


Status: In Review
Board: 💡 Feature Request
Date: Over 1 year ago
Author: Guillaume
