This article is about the legacy Apify Crawler product, which is being phased out and replaced by the apify/legacy-phantomjs-crawler actor. Note that this tutorial can be applied to the new actor as well.

For new projects, we recommend using the newer apify/web-scraper actor, which is based on the modern headless Chrome browser and offers more features and better performance. For complete control over the crawling process, you might also consider developing a new actor in Node.js using the Apify SDK.

In this tutorial, we will demonstrate step by step how to set up a basic crawler with Apify. We recommend you try the steps yourself in a separate browser window. All you need is a basic knowledge of HTML, JavaScript, CSS and ideally jQuery.

The tutorial is divided into four chapters:

  1. Getting started
  2. First data
  3. Scraping full page content
  4. Crawling multiple pages

What is our goal?

Our goal is to create a crawler that can extract the first 150 articles from the front page of Hacker News. The source website looks like this:

We want to extract the data from the website in a structured JSON format:

  "rank": "1.",
  "title": "From inside Facebook",
  "link": "",
  "score": "216",
  "author": "ISL",
  "time": "2 hours ago",
  "comments": "53",
  "url": ""
  "rank": "2.",
  "title": "That awkward moment when Apple mocked good hardware and poor people",
  "link": "",
  "score": "801",
  "author": "nkurz",
  "time": "5 hours ago",
  "comments": "406",
  "url": ""
  "rank": "3.",
  "title": "Akin's Laws of Spacecraft Design*",
  "link": "",
  "score": "130",
  "author": "khet",
  "time": "4 hours ago",
  "comments": "28",
  "url": ""

Create a new crawler

First, sign in to Apify and go to the Crawlers section, where you will see copies of example crawlers from our front page. Click the Add new button to create a new crawler.

You will see a page with crawler settings, which is divided into 4 sections:

  • Basic settings shows basic properties of the crawler, such as start URLs, pseudo-URLs and a page function
  • Advanced settings contains detailed crawler configuration
  • API describes how to start the crawler and fetch results using an API
  • Run console helps you develop and debug your crawler

First run

So, let's dive in. First, fill in Custom ID to give your new crawler a name, for example "Hacker News - my very first crawler". Then add "" to Start URLs to let the crawler know which web page it should open, and click the Save button. That's all you need to perform the first dry run of your crawler, because the Page function is pre-defined for you.

Your configuration should look as follows:

Go to Run console and click the Run button. After a few seconds, you should see a screenshot of the front page of Hacker News in the Page tab:

Note that the crawler only loaded a single web page, because we didn't tell it how to find the next pages - we will address this issue later.
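For a quick preview of how that works (we will cover it properly in the "Crawling multiple pages" chapter), the usual approach is to add a pseudo-URL that matches the "More" link at the bottom of each Hacker News page. The exact pattern below is only an illustration; the `[...]` brackets enclose a regular expression in the crawler's pseudo-URL syntax:

```
https://news.ycombinator.com/news?p=[\d+]
```

Any link found on a loaded page that matches this pattern would then be enqueued and crawled as well.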

Now let's have a look at the Results tab, which contains the structured data extracted from the page. As you can see, it only contains dummy values because we only used the pre-filled Page function that did not extract any meaningful values from the web page. In the next chapter, we will fix this.
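To give you an idea of where we are heading, here is a rough sketch of what a Page function extracting the fields above might look like. The crawler calls this function in the context of the loaded page and passes it a `context` object that exposes jQuery. Note that the CSS selectors used here (`tr.athing`, `.subtext`, `.rank`, and so on) are assumptions based on typical Hacker News markup and may need adjusting against the live page:

```javascript
function pageFunction(context) {
    // jQuery is provided by the crawler via the context object
    var $ = context.jQuery;
    var results = [];

    // Each story on Hacker News is assumed to be a <tr class="athing">
    // row, followed by a sibling row holding the score/author/comments.
    $('tr.athing').each(function () {
        var row = $(this);
        var subtext = row.next().find('.subtext');
        results.push({
            rank: row.find('.rank').text(),
            title: row.find('.title a').first().text(),
            link: row.find('.title a').first().attr('href'),
            score: subtext.find('.score').text(),
            author: subtext.find('.hnuser').text(),
            time: subtext.find('.age').text(),
            comments: subtext.find('a').last().text(),
            url: context.request.url
        });
    });

    // The returned array becomes the crawler's results for this page
    return results;
}
```

Don't worry about the details yet; the next chapters build this up step by step.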
