Thursday: 2:15-3:45
 
=Code=
12/12/17: [[Scholar Crawler Main Program]]
=Steps=
Incomplete; still struggling to find links.
 
==Keywords List==
 
Find a copy of the Keywords List in the Dropbox: https://www.dropbox.com/s/mw5ep33fv7vz1rp/Keywords%20%3A%20Categories.xlsx?dl=0
=Christy's LOG=
'''10/10'''
 
Found a way to get past Google Scholar blocking my crawling, so I spent the time writing Selenium code. The crawler can now automatically download the BibTeX entries for the first 10 search results for a given term, which is awesome. I am part of the way through having the crawler also save the PDF link for each result once it has saved the BibTeX. Yay Selenium :')))
 
Code located at E:/McNair/Software/Google_Scholar_Crawler/downloadPDFs.py
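
Below is a minimal sketch of the Selenium approach described above, not the actual downloadPDFs.py code. The Google Scholar class name (gs_or_cit) and the assumption that the BibTeX export page renders as plain text in a pre element are guesses about Scholar's markup and may need updating; the sleep() calls pace the requests because of the blocking problems noted in this log.

<pre>
import time
import urllib.parse

from selenium import webdriver
from selenium.webdriver.common.by import By

def save_bibtex_for_search(term, n_results=10, out_path="bibtex_results.txt"):
    """Save the BibTeX for the first n_results Scholar hits for `term`."""
    driver = webdriver.Firefox()
    results_url = ("https://scholar.google.com/scholar?q="
                   + urllib.parse.quote_plus(term))
    with open(out_path, "a", encoding="utf-8") as out:
        for i in range(n_results):
            # Reload the results page each pass so element references never
            # go stale, and pause so Scholar is less likely to flag a robot.
            driver.get(results_url)
            time.sleep(4)
            # one "Cite" link per result; "gs_or_cit" is an assumed class name
            cite_links = driver.find_elements(By.CLASS_NAME, "gs_or_cit")
            if i >= len(cite_links):
                break
            cite_links[i].click()  # open the citation popup
            time.sleep(2)
            driver.find_element(By.LINK_TEXT, "BibTeX").click()
            time.sleep(2)
            # the export page is plain text, rendered inside a pre element
            out.write(driver.find_element(By.TAG_NAME, "pre").text + "\n\n")
    driver.quit()
</pre>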
 
'''11/02'''
 
Things are good! Today I extended the program so that it can fetch however many pages of search results we want and collect the PDF links for every result it can see. Towards the end of the day, Google Scholar detected that we were a robot and started blocking me; hopefully the block clears by the time I am back on Monday. Now working on parsing apart the saved txt file so the crawler can visit each saved link and download the PDFs (sketched below). Should not be particularly difficult.
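
A sketch of that parsing-and-downloading step, assuming the crawler saved one PDF URL per line in the txt file; the file name, output directory, and naming scheme here are illustrative, not the project's actual ones.

<pre>
import os
import time

import requests

def download_saved_pdfs(links_file="pdf_links.txt", out_dir="pdfs"):
    """Visit each saved link and write the response out as a PDF."""
    os.makedirs(out_dir, exist_ok=True)
    with open(links_file, encoding="utf-8") as f:
        urls = [line.strip() for line in f if line.strip()]
    for i, url in enumerate(urls):
        try:
            resp = requests.get(url, timeout=30)
            resp.raise_for_status()
        except requests.RequestException as err:
            # some journal pages won't serve the PDF directly; skip those
            print("skipping", url, "->", err)
            continue
        with open(os.path.join(out_dir, "paper_%03d.pdf" % i), "wb") as out:
            out.write(resp.content)
        time.sleep(2)  # pace the requests so hosts are less likely to block us
</pre>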
 
'''11/28'''
 
Basically everything is ready to go, so long as Google Scholar leaves me alone. We currently have a program which takes in a search term and the number of pages you want to search. The crawler will pull as many PDF links from those pages as possible (it goes slowly to avoid getting caught). Next, it will download all the PDFs discovered by the crawler (and possibly save the links for journals whose PDFs were not linked on Scholar). It will then convert all the PDFs to text. Finally, it will search through each paper for a list of terms and for any definitions of patent thickets (a rough sketch of these last stages is below). I will be writing documentation for these pieces of code today.
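
As a rough illustration of those last two pipeline stages (PDF-to-text conversion and the term/definition search), here is a sketch using PyPDF2. PyPDF2 is just one common extraction library, and the keyword counting and "definition" heuristic below are illustrative assumptions, not the project's actual logic.

<pre>
import re

import PyPDF2

def pdf_to_text(path):
    """Extract the text of every page of a PDF."""
    with open(path, "rb") as f:
        reader = PyPDF2.PdfReader(f)
        return "\n".join(page.extract_text() or "" for page in reader.pages)

def scan_paper(text, keywords):
    """Count keyword hits and pull out candidate patent-thicket definitions."""
    hits = {kw: len(re.findall(re.escape(kw), text, re.IGNORECASE))
            for kw in keywords}
    # crude heuristic: a sentence that mentions "patent thicket" together
    # with a definitional verb is treated as a candidate definition
    sentences = re.split(r"(?<=[.!?])\s+", text)
    definitions = [s for s in sentences
                   if "patent thicket" in s.lower()
                   and re.search(r"\b(is|are|defined?|refers?)\b", s, re.IGNORECASE)]
    return hits, definitions
</pre>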
=Lauren's LOG=