Scrapy is mostly used to scrape data from websites, and a common way of presenting data on a website is with a table.

An HTML table starts with a table tag, with each row defined by a tr tag and each column by a td tag. Optionally, thead is used to group the header rows and tbody to group the content rows.

To scrape data from an HTML table, we basically need to find the table we're interested in on the page, then iterate over each of its rows and pick out the columns that hold the data we want.

Steps to scrape HTML table using Scrapy:

  1. Go to the web page that you want to scrape the table data from using your web browser.

    For this example, we'll scrape Bootstrap's Tables documentation page.

  2. Inspect the table element using your browser's built-in developer tools or by viewing the page's source code.

    In this case, the table is assigned the classes table and table-striped. Here's the actual HTML code for the table:

    <table class="table table-striped">
      <thead>
        <tr>
          <th scope="col">#</th>
          <th scope="col">First</th>
          <th scope="col">Last</th>
          <th scope="col">Handle</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <th scope="row">1</th>
          <td>Mark</td>
          <td>Otto</td>
          <td>@mdo</td>
        </tr>
        <tr>
          <th scope="row">2</th>
          <td>Jacob</td>
          <td>Thornton</td>
          <td>@fat</td>
        </tr>
        <tr>
          <th scope="row">3</th>
          <td>Larry</td>
          <td>the Bird</td>
          <td>@twitter</td>
        </tr>
      </tbody>
    </table>
  3. Launch the Scrapy shell in a terminal, passing the web page URL as an argument.
    $ scrapy shell https://getbootstrap.com/docs/4.0/content/tables/
  4. Check the HTTP response code to confirm the request was successful.
    >>> response
    <200 https://getbootstrap.com/docs/4.0/content/tables/>

    200 is the standard HTTP status code for a successful (OK) response.

  5. Search for the table you're interested in using an XPath selector.
    >>> table = response.xpath('//*[@class="table table-striped"]')
    >>> table
    [<Selector xpath='//*[@class="table table-striped"]' data=u'<table class="table table-striped">\n  <t'>]

    In this case the table is assigned the table and table-striped CSS classes, and that's what we use in our selector.

  6. Narrow the search down to tbody, if applicable.
    >>> table = response.xpath('//*[@class="table table-striped"]//tbody')
    >>> table
    [<Selector xpath='//*[@class="table table-striped"]//tbody' data=u'<tbody>\n    <tr>\n      <th scope="row">1'>]
  7. Get the table rows by searching for tr.
    >>> rows = table.xpath('//tr')
    >>> rows
    [<Selector xpath='//tr' data=u'<tr>\n      <th scope="col">#</th>\n      '>, <Selector xpath='//tr' data=u'<tr>\n      <th scope="row">1</th>\n      '>, <Selector xpath='//tr' data=u'<tr>\n      <th scope="row">2</th>\n      '>, <Selector xpath='//tr' data=u'<tr>\n      <th scope="row">3</th>\n      '>,
    #---snipped---

    Note that //tr is an absolute XPath, so it matches every tr in the entire document, including the header row, rather than only the rows under the selected tbody. To search relative to the current selection, use .//tr instead; here we keep //tr, so rows[0] is the header row.
  8. Select a row to test.
    >>> row = rows[2]

    The matched rows are returned as a list-like SelectorList, so individual rows can be accessed by index; rows[2] here is the second data row, since rows[0] is the header row.

  9. Access the row's columns via the td selector and extract the column data.
    >>> row.xpath('td//text()')[0].extract()
    u'Jacob'

    The first column uses <th> instead of <td>, so the td selector skips it and index 0 corresponds to the First column of the table.

  10. Combine everything into a complete snippet by iterating over the rows with a for loop.
    >>> for row in response.xpath('//*[@class="table table-striped"]//tbody//tr'):
    ...     name = {
    ...         'first' : row.xpath('td[1]//text()').extract_first(),
    ...         'last': row.xpath('td[2]//text()').extract_first(),
    ...         'handle' : row.xpath('td[3]//text()').extract_first(),
    ...     }
    ...     print(name)
    ...
    {'handle': u'@mdo', 'last': u'Otto', 'first': u'Mark'}
    {'handle': u'@fat', 'last': u'Thornton', 'first': u'Jacob'}
    {'handle': u'@twitter', 'last': u'the Bird', 'first': u'Larry'}
  11. Create a Scrapy spider from the previous code (optional). Note that in recent Scrapy versions, .get() is the preferred alias for .extract_first().
    import scrapy
     
    class BootstrapTableSpider(scrapy.Spider):
        name = "bootstrap_table"
     
        def start_requests(self):
            urls = [
                'https://getbootstrap.com/docs/4.0/content/tables/',
            ]
            for url in urls:
                yield scrapy.Request(url=url, callback=self.parse)
     
        def parse(self, response):
            for row in response.xpath('//*[@class="table table-striped"]//tbody/tr'):
                yield {
                    'first' : row.xpath('td[1]//text()').extract_first(),
                    'last': row.xpath('td[2]//text()').extract_first(),
                    'handle' : row.xpath('td[3]//text()').extract_first(),
                }
  12. Run the spider with JSON output. The -o - option writes to standard output; note that the -t format flag is deprecated in newer Scrapy releases, where the format is instead inferred from the output file's extension.
    $ scrapy crawl --nolog -o - -t json bootstrap_table
    [
    {"last": "Otto", "handle": "@mdo", "first": "Mark"},
    {"last": "Thornton", "handle": "@fat", "first": "Jacob"},
    {"last": "the Bird", "handle": "@twitter", "first": "Larry"}
    ]
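The shell session above requires a live network request. For a quick offline check of the same idea, here is a minimal stand-in sketch using only Python's standard library — xml.etree.ElementTree in place of Scrapy's selectors, which is an assumption for illustration, not Scrapy's API. It mirrors the tbody/tr/td iteration of steps 6 to 10 against the sample markup from step 2:

```python
import json
import xml.etree.ElementTree as ET

# Sample markup from step 2; it is well-formed, so the standard-library
# XML parser can handle it without Scrapy installed.
HTML_TABLE = """
<table class="table table-striped">
  <thead>
    <tr><th>#</th><th>First</th><th>Last</th><th>Handle</th></tr>
  </thead>
  <tbody>
    <tr><th>1</th><td>Mark</td><td>Otto</td><td>@mdo</td></tr>
    <tr><th>2</th><td>Jacob</td><td>Thornton</td><td>@fat</td></tr>
    <tr><th>3</th><td>Larry</td><td>the Bird</td><td>@twitter</td></tr>
  </tbody>
</table>
"""

def parse_table(markup):
    """Return a list of dicts, one per data row of the table."""
    table = ET.fromstring(markup)
    rows = []
    # Iterate only the data rows under tbody, mirroring step 6.
    for tr in table.findall('./tbody/tr'):
        # findall('td') skips the leading th cell, mirroring step 9.
        cells = [td.text for td in tr.findall('td')]
        rows.append({'first': cells[0], 'last': cells[1], 'handle': cells[2]})
    return rows

for name in parse_table(HTML_TABLE):
    print(name)  # one dict per data row, as in step 10

print(json.dumps(parse_table(HTML_TABLE)))  # JSON export, as in step 12
```

The structure of the extraction loop is the same as in the spider's parse method; only the selector API differs.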