
# python-web-crawler

A web crawler written in Python.

This crawler builds a repository of URLs starting from a given seed URL. To run it, you need the following packages:

1. urllib
2. urlparse
3. logging
4. BeautifulSoup

Note that `urllib`, `urlparse`, and `logging` ship with the Python 2 standard library (in Python 3, `urlparse` became `urllib.parse`); BeautifulSoup must be installed separately.

After checking that all packages are available, run the following command in your Python environment:

`python web_crawler.py [number of links to crawl] [seed URL, default: http://python.org/]`

Both arguments are optional.
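The `web_crawler.py` script itself is not shown here, but the behaviour described above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual code: it uses only the Python 3 standard library (`html.parser` stands in for BeautifulSoup), and the function names `extract_links` and `crawl` are invented for this example.

```python
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkParser(HTMLParser):
    """Collects href values from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html, base_url):
    """Return absolute URLs for every <a href> found in the page."""
    parser = LinkParser()
    parser.feed(html)
    return [urljoin(base_url, link) for link in parser.links]


def crawl(start_url, max_links=10):
    """Breadth-first crawl that collects up to max_links unique URLs."""
    seen = set()
    queue = [start_url]
    while queue and len(seen) < max_links:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip unreachable or non-decodable pages
        queue.extend(extract_links(html, url))
    return seen
```

A call such as `crawl("http://python.org/", max_links=20)` would return a set of up to 20 URLs reachable from the seed page.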

You can stop the program at any time by pressing Ctrl + C.