This article relates to the legacy Apify Crawler product, which is being retired in favor of the apify/legacy-phantomjs-crawler actor. All the information in this article is still valid, and applies to both the legacy Crawler product and the new actor. For more information, please read this blog post.

For new projects, we recommend using the newer apify/web-scraper actor that is based on the modern headless Chrome browser.

Apify users sometimes need to submit a form on pages created with ASP.NET (the URL typically ends with .aspx). These pages use a different approach to submitting forms and navigating between pages, and this tutorial shows you how to handle them. The approach is based on a blog post by Todd Hayton, where he explains how crawlers for ASP.NET pages should work.
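The "different approach" boils down to this: an ASP.NET WebForms page wraps its whole content in a single form, keeps state in hidden inputs such as __VIEWSTATE, and navigates via a browser-side __doPostBack() helper that fills in the __EVENTTARGET and __EVENTARGUMENT hidden fields before submitting the form. The sketch below models that as a pure function over a plain object of form fields; it is an illustration of the mechanism, not Apify code.

```javascript
// Sketch (not Apify code): what the __doPostBack() helper on an ASP.NET page
// effectively does - it copies its two arguments into the __EVENTTARGET and
// __EVENTARGUMENT hidden inputs and then submits the form.
function doPostBackFields(hiddenFields, eventTarget, eventArgument) {
  return Object.assign({}, hiddenFields, {
    __EVENTTARGET: eventTarget,
    __EVENTARGUMENT: eventArgument,
  });
}

// A "next page" link such as href="javascript:__doPostBack('ctl00$pager','2')"
// would therefore POST fields like these:
const fields = doPostBackFields(
  { __VIEWSTATE: 'dDwxMjM' }, // shortened fake value; real ones are long Base64 blobs
  'ctl00$pager',
  '2'
);
console.log(fields.__EVENTTARGET); // 'ctl00$pager'
```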

First of all, you need to copy and paste this function into your crawler's Page function:

var enqueueAspxForm = function(request, formSelector, submitButtonSelector, async) {
    // Serialize all form fields, including ASP.NET's hidden state inputs
    request.postData = $(formSelector).serialize();
    // Append the submit button's name/value pair, if the button exists
    if ($(submitButtonSelector).length) {
        request.postData += decodeURIComponent("&" + $(submitButtonSelector).attr("name") + "=" + $(submitButtonSelector).attr("value"));
    }
    request.postData += decodeURIComponent("&__ASYNCPOST=" + async.toString());
    request.method = "POST";
    // Randomize uniqueKey so repeated POSTs to the same URL are not deduplicated
    request.uniqueKey = Math.random();
    return request;
};

The function has these parameters:

request - the same object you would pass to context.enqueuePage()

formSelector - selector for the form to be submitted, e.g. 'form[name="test"]'

submitButtonSelector - selector for the button that submits the form, e.g. '#nextPageButton'

async - if true, the server returns only the updated parts of the page (a partial postback) instead of the full HTML content

Then you can use it in your Page function as follows:

    context.enqueuePage(enqueueAspxForm({
        url: "",
        label: "searchResult"
    }, 'form[name="aspnetForm"]', '#ctl00_ContentPlaceHolder1_btnSearch', false));
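To see what enqueueAspxForm() actually builds, here is a runnable plain-Node sketch of the resulting POST body. The helper names below are ours (they are not part of the Apify API), and serializeFields() stands in for jQuery's serialize():

```javascript
// Hypothetical stand-in for jQuery's serialize(): turns a plain object of
// form fields into an application/x-www-form-urlencoded body.
function serializeFields(fields) {
  return Object.keys(fields)
    .map((k) => encodeURIComponent(k) + '=' + encodeURIComponent(fields[k]))
    .join('&');
}

// Mirrors what enqueueAspxForm() appends to the serialized form:
// the submit button's name/value pair and the __ASYNCPOST flag.
function buildAspxPostData(fields, submitName, submitValue, async) {
  let postData = serializeFields(fields);
  postData += '&' + submitName + '=' + submitValue;
  postData += '&__ASYNCPOST=' + async.toString();
  return postData;
}

const postData = buildAspxPostData(
  { __VIEWSTATE: 'dDwxMjM', __EVENTVALIDATION: 'abc' }, // shortened fake values
  'ctl00$ContentPlaceHolder1$btnSearch',
  'Search',
  false
);
console.log(postData);
```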

As a template, you can use the community crawler we've shared. As always, if you have any questions, we're just an email away.
