If you provided a way to tell the website scanner to ignore certain pages using an HTML meta tag or an HTTP header, it would be much easier to exclude pages when there are thousands of them.

My website has over 10,000 blog posts, and most of them are not trainable. If I could tell the scanner to skip those pages in the page markup or at the server level, it would save a lot of time.

It could work the same way as Google's robots meta tag and X-Robots-Tag HTTP header, but with your own naming.
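For illustration, a per-page directive might look something like the sketch below, modeled on Google's existing mechanism. The `scanner` meta name and the `noscan` value are hypothetical placeholders for whatever naming you choose, not existing syntax:

```html
<!-- Hypothetical per-page opt-out, analogous to <meta name="robots" content="noindex"> -->
<meta name="scanner" content="noscan">
```

And at the server level, an equivalent response header could cover a whole section of the site at once instead of editing thousands of pages, e.g. in nginx (the `X-Scanner-Tag` header name is again a placeholder, analogous to Google's X-Robots-Tag):

```nginx
# Hypothetical response header applying the opt-out
# to every URL under /blog/ in one place.
location /blog/ {
    add_header X-Scanner-Tag "noscan";
}
```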
In Review
💡 Feature Request
Over 1 year ago
Guillaume