Scraping Google Scholar with Python and BeautifulSoup

Mohan Ganesan
4 min read · Jul 18, 2020


Google Scholar is a tremendous resource for academic material from across the web. Today let's see how we can scrape Google Scholar results for the search "Web scraping."

We will use BeautifulSoup to help us extract information, and we will use the Python Requests module to fetch the data.

To start with, this is the boilerplate code we need to get the search results page and set up BeautifulSoup to help us use CSS selectors to query the page for meaningful data.

# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9'}
url = 'https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=web+scraping&btnG='
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.content, 'lxml')

We are also passing the User-Agent header to simulate a browser call, so we don't get blocked.

Now let's analyze the Scholar search results for the query we want. When we inspect the page, we find that each item's HTML is encapsulated in a <div> tag that carries a data-lid attribute.

We can use this to break the HTML document into these data-lid elements, each of which contains an individual item's information, like this…

# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9'}
url = 'https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=web+scraping&btnG='
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.content, 'lxml')

# print(soup.select('[data-lid]'))
for item in soup.select('[data-lid]'):
    try:
        print('----------------------------------------')
        print(item)
    except Exception as e:
        # raise e
        print('')

And when you run it…

python3 scrapeScholar.py

You can tell that the code is isolating each data-lid element's HTML.

On further inspection, you can see that the title of each search result item is always inside an <h3> tag. So let's try and retrieve that.

# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9'}
url = 'https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=web+scraping&btnG='
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.content, 'lxml')

# print(soup.select('[data-lid]'))
for item in soup.select('[data-lid]'):
    try:
        print('----------------------------------------')
        # print(item)
        print(item.select('h3')[0].get_text())
    except Exception as e:
        # raise e
        print('')

That will get us the titles.

Bingo!

Now let's get the other data pieces.

# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9'}
url = 'https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=web+scraping&btnG='
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.content, 'lxml')

# print(soup.select('[data-lid]'))
for item in soup.select('[data-lid]'):
    try:
        print('----------------------------------------')
        # print(item)
        print(item.select('h3')[0].get_text())
        print(item.select('a')[0]['href'])
        print(item.select('.gs_rs')[0].get_text())
        print('----------------------------------------')
    except Exception as e:
        # raise e
        print('')

And when run, it produces all the info we need: the link and summary, in addition to the title.

In more advanced implementations, you may even need to rotate the User-Agent string so Google can't tell it's the same browser!
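
A minimal sketch of that idea, assuming you maintain your own pool of User-Agent strings (the ones below are just illustrative examples), could look like this:

# -*- coding: utf-8 -*-
import random
import requests
from bs4 import BeautifulSoup

# Illustrative pool of User-Agent strings; in practice, keep a larger,
# up-to-date list of real browser User-Agents.
user_agents = [
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36',
    'Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0',
]

url = 'https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=web+scraping&btnG='

# Pick a different User-Agent for each request
headers = {'User-Agent': random.choice(user_agents)}
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.content, 'lxml')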

Go a little further, though, and you will realize that Google can simply block your IP, ignoring all your other tricks. This is a bummer, and this is where most web crawling projects fail.

Overcoming IP Blocks

Investing in a private rotating proxy service like Proxies API can, most of the time, make the difference between a successful, headache-free web scraping project that gets the job done consistently and one that never really works.

Plus, with the 1000 free API calls we are currently offering, you have almost nothing to lose by using our rotating proxy and comparing notes. It only takes one line of integration, so it's hardly disruptive.

Our rotating proxy server Proxies API provides a simple API that can solve all IP Blocking problems instantly.

  • With millions of high speed rotating proxies located all over the world
  • With our automatic IP rotation
  • With our automatic User-Agent-String rotation (which simulates requests from different, valid web browsers and web browser versions)
  • With our automatic CAPTCHA solving technology

Hundreds of our customers have successfully solved the headache of IP blocks with a simple API.

A simple API call can access the whole thing, as shown below, in any programming language.

curl "http://api.proxiesapi.com/?key=API_KEY&url=https://example.com"

We have a running offer of 1000 API calls completely free. Register and get your free API Key here.

The blog was originally posted at: https://www.proxiesapi.com/blog/scraping-google-scholar-with-python-and-beautifuls.html.php
