I am working on a Struts2-based web application and want to stop web crawlers from visiting it.
2 Answers
To stop requests from crawler machines, you need to know the IP addresses of those machines. You can then create a servlet filter for your application: a filter can read the client IP of every incoming request, and if it matches a known crawler address, reject the request before it reaches your app. Hope this helps.
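A minimal sketch of such a filter, assuming the javax.servlet API and a hard-coded blocklist (the class name CrawlerBlockFilter and the IP addresses are placeholders, not real crawler addresses):

```java
import java.io.IOException;
import java.util.Set;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Blocks requests whose client IP is on a blocklist of known crawler IPs.
public class CrawlerBlockFilter implements Filter {

    // Placeholder addresses for illustration; replace with IPs you have
    // actually observed crawling your site.
    private static final Set<String> BLOCKED_IPS = Set.of(
            "203.0.113.10",
            "198.51.100.23"
    );

    @Override
    public void init(FilterConfig filterConfig) {
        // no initialization needed for this sketch
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        String clientIp = request.getRemoteAddr();
        if (BLOCKED_IPS.contains(clientIp)) {
            // Reject the request with 403 Forbidden instead of passing it on.
            ((HttpServletResponse) response).sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        // Not a blocked IP: continue down the filter chain to the app.
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }
}
```

You would register the filter in web.xml with a filter-mapping for /*, placed before the Struts2 dispatcher so blocked requests never reach an action. Keep in mind that getRemoteAddr() returns the proxy's address if the app sits behind a load balancer, and crawlers rotate IPs, so a blocklist needs constant maintenance.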

Manh Trinh
You could host a robots.txt file, which is honored by most well-behaved crawlers. See here.
The Robot Exclusion Standard, also known as the Robots Exclusion Protocol or robots.txt protocol, is a convention for advising cooperating web crawlers and other web robots about accessing all or part of a website which is otherwise publicly viewable.
Note that this will not stop every crawler, but blocking them all would be very hard, if not impossible, anyway.
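For example, a robots.txt served from the web root that asks all crawlers to stay away from the entire site needs only the two standard directives:

```
User-agent: *
Disallow: /
```

Cooperating crawlers fetch /robots.txt before crawling and honor these rules; malicious bots simply ignore the file, which is why this only stops well-behaved robots.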

geert3