WebCrawler Engine in C# (first draft)

April 7, 2006 | Uncategorized | 7 Comments

A few weeks ago, I wrote about using SearchAroo as a spider to index a site with DotLucene. I've since written a new WebCrawler using SearchAroo as a base and turned it into a library that can be reused in other applications.

Download Web Crawler (zip file with WebCrawler engine and sample web and forms apps)

Here are the improvements I've made:

  1. Gets text from the following HTML tag attributes: alt, title, summary, longdesc
  2. Better ability to resolve relative URLs
  3. The WebDocument object keeps a record of every file it links out to, including external and internal links as well as images. This is useful for determining whether your site has missing images or broken outgoing links.
  4. Compiled into a reusable library (the author of SearchAroo didn't want to have a dll, but I feel it's much more usable this way), which means it can be plugged into any indexing framework or used for other purposes such as simple link checking.
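Improvement 2 hinges on resolving links like "about.html" or "/contact" against the page they were found on. The library's internal logic isn't shown here, but .NET's built-in System.Uri class handles this well; the following standalone sketch (not the library's actual code) illustrates the idea:

```csharp
using System;

class UrlResolutionDemo
{
    // Resolve a link found in a page against that page's URL.
    // This mirrors what a crawler must do for href="about.html",
    // href="/contact", href="../img/logo.gif", and so on.
    public static string Resolve(string pageUrl, string link)
    {
        Uri baseUri = new Uri(pageUrl);
        Uri absolute = new Uri(baseUri, link); // handles both relative and absolute links
        return absolute.AbsoluteUri;
    }

    static void Main()
    {
        Console.WriteLine(Resolve("http://mywebsite.com/blog/post.html", "about.html"));
        // http://mywebsite.com/blog/about.html
        Console.WriteLine(Resolve("http://mywebsite.com/blog/post.html", "/contact"));
        // http://mywebsite.com/contact
        Console.WriteLine(Resolve("http://mywebsite.com/blog/post.html", "http://other.com/x"));
        // http://other.com/x
    }
}
```

Because Uri also passes absolute links through unchanged, the same call works for every href the crawler encounters.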

Here is the basic code to get it running:

string baseUrl = "http://mywebsite.com/";

CrawlerEngine crawler = new CrawlerEngine();
crawler.OnDocumentLoaded += new DocumentHandler(crawler_OnDocumentLoaded);

// kick off the crawl from the base URL
// (assumed entry point -- check the download for the exact method name)
crawler.Start(baseUrl);

void crawler_OnDocumentLoaded(WebDocumentBase webDocument, int level)
{
    // do indexing code here

    // WebDocumentBase is the base class for all documents that are downloaded
    // and spidered. It has the following properties: Uri, ContentType, MimeType,
    // Encoding, Length, TextData, InternalLinks, ExternalLinks, ImageSrcs

    // If the file is an HTML file, it can be cast to an HtmlDocument, which
    // adds the following properties: Title, Description, Keywords, Html

    // Future additions will hopefully add plugins for PdfDocument and WordDocument
}
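Improvement 1 (harvesting text from alt, title, summary, and longdesc attributes) can be sketched in a few lines. This is not the library's actual parser, just a self-contained regex illustration of what "getting text from attributes" means:

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class AttributeTextDemo
{
    // Pull indexable text out of the attributes the crawler reads
    // (alt, title, summary, longdesc). A real HTML parser is more
    // robust than a regex; this just illustrates the idea.
    public static string ExtractAttributeText(string html)
    {
        MatchCollection matches = Regex.Matches(html,
            "(?:alt|title|summary|longdesc)\\s*=\\s*\"([^\"]*)\"",
            RegexOptions.IgnoreCase);
        List<string> parts = new List<string>();
        foreach (Match m in matches)
            parts.Add(m.Groups[1].Value);
        return string.Join(" ", parts.ToArray());
    }

    static void Main()
    {
        string html = "<img src=\"logo.gif\" alt=\"Company logo\" />" +
                      "<table summary=\"Sales by quarter\"></table>";
        Console.WriteLine(ExtractAttributeText(html));
        // Company logo Sales by quarter
    }
}
```

Text collected this way is exactly the kind of content that would otherwise be invisible to an indexer that only reads element bodies.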

Future things I'd like to add:

  1. Other document types (PDF, Word, other Office formats) for indexing like DotLucene's indexer.
  2. More events to help steer the crawling
  3. Weight to heading tags (h1, h2, etc.)
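Item 3 on the wish list, weighting heading tags, usually means multiplying a term's score by a boost factor based on the tag it appeared in. The factors below are hypothetical (the library doesn't implement this yet); the sketch just shows the shape such a feature might take:

```csharp
using System;
using System.Collections.Generic;

class HeadingWeightDemo
{
    // Hypothetical boost factors for heading tags; any text outside
    // a heading falls back to a neutral weight of 1.0.
    static readonly Dictionary<string, double> Boost = new Dictionary<string, double>
    {
        { "h1", 3.0 }, { "h2", 2.5 }, { "h3", 2.0 },
        { "h4", 1.5 }, { "h5", 1.25 }, { "h6", 1.1 }
    };

    public static double WeightFor(string tag)
    {
        double w;
        return Boost.TryGetValue(tag.ToLowerInvariant(), out w) ? w : 1.0;
    }

    static void Main()
    {
        Console.WriteLine(WeightFor("H1")); // 3
        Console.WriteLine(WeightFor("p"));  // 1
    }
}
```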

Please note, the namespace "Refresh.Web" is for a future business endeavor. The code is released under a Creative Commons Attribution license. If you're interested in using it, please leave a comment with any additional features you'd like to see.

7 responses to “WebCrawler Engine in C# (first draft)”

  1. This post is a bit dated, but if you ever get the notion to investigate crawling any further see: http://arachnode.net – an open source site crawler written in C# using SQL Server 2005.

  2. Fabien says:

    Your sample is too simple, even if it is a beginning.

    You don’t handle Proxy & Credentials. I work behind a proxy that requires authentication, and your code doesn’t work there. If I find a solution, I will send it to you.

  3. jon says:

    let me see

  4. Hey –

    I just promoted arachnode.net to release/stable status, for those that are interested!


  5. Bluesummers says:

    I realize it pales in comparison to Arachnode, but I also wrote a small crawler. It’s at http://www.CuteCrawler.com . ^_^

  6. Thomas says:

    The download link is inactive

  7. Betty Clark says:

    I have read a lot of the comments and I just wonder why people say the things they do, I mean they can find the bad in anything. I guess that is where we are in this world. Just hurt hurt hurt, no matter what the subject is. Lawrence Williams http://www.trybw.com Fort Myers, Naples, Bonita,Cape Coral Computer Repair Service

