{{McNair Projects
|Has title=Google Scholar Crawler
|Has owner=Christy Warden
|Has start date=November 10, 2017
|Has keywords=Google, Scholar, Tool
|Has sponsor=McNair Center
|Has project status=Active
|Has project output=Tool
|Depends upon it=[[PTLR Webcrawler]]
}}
 
==Overview==
===Scholarly===
Another parser of potential interest is [https://github.com/OrganicIrradiation/scholarly scholarly]. However, it produces less information than the scholar parser does.
 
 
 
==Code Written for McNair==
 
===downloadPDFs.py===
====Overview====
downloadPDFs.py is currently being replaced by scholarcrawl.py, located in the same directory. The code lives at E:\McNair\Software\Google_Scholar_Crawler\downloadPDFs.py.
 
This program takes a search term and a number of result pages to crawl, and collects information about the papers returned by that search. It depends on Selenium because Google Scholar blocks traditional crawlers, and it deliberately runs slowly to avoid being blocked by the site.
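The "run slowly to avoid being blocked" behaviour amounts to pausing a randomized interval between page fetches. A minimal sketch of that idea, using only the standard library (the helper name, delay bounds, and stand-in fetcher are illustrative, not taken from downloadPDFs.py, which does the real fetching through Selenium):

```python
import random
import time

def polite(fetch, min_delay=5.0, max_delay=15.0):
    """Wrap a page-fetching function so every call is preceded by a
    randomized pause, making the traffic look less robotic."""
    def wrapped(*args, **kwargs):
        time.sleep(random.uniform(min_delay, max_delay))
        return fetch(*args, **kwargs)
    return wrapped

# Example with a stand-in fetcher (the real crawler drives a browser here).
fetch_results_page = polite(lambda page: f"results for page {page}",
                            min_delay=0.0, max_delay=0.1)
print(fetch_results_page(3))  # → results for page 3
```

Randomizing the delay, rather than sleeping a fixed amount, makes the request timing less regular and therefore harder to fingerprint.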
 
====How to Use====
Before you run the program, build a directory that you want all the results to go in. Inside this directory, create a folder called "BibTeX." For example, I could make a folder in E:\McNair\Projects\Patent_Thickets called "My_Crawl." Inside My_Crawl I should make sure I have a "BibTeX" folder. You should also choose a search term and decide how many pages you want to search.
 
Open the program downloadPDFs.py in Komodo. At the very end of the program, type:
 
''main(your query, your output directory, your num pages)''
 
Replace "your query" with the search term you want (like "patent thickets", making sure to include quotes around the term). Replace "your output directory" with the output directory you want these files to go to. Still using my example above, I would type "E:\McNair\Projects\Patent_Thickets\My_Crawl", making sure to include the quotes around the directory. Finally, replace "your num pages" with the number of pages you want to search. Click the play button in the top center of the screen.
 
====What you'll get back====
After the program is done running, go back to the folder you created to see the outputs. First, in your BibTeX folder, you will see a series of files named by the BibTeX keys of papers. Each of these is a text file containing the BibTeX for the paper. In your outer folder, you will have files called "Query_your query_pdfTable7.txt", where "your query" is your search term and the 7 can be any number. Each of these files is a text file with BibTeX keys in the left column and a link to the paper's PDF in the other column.
 
====In Progress====
1) Trying to find the sweet spot where we move as fast as possible without being detected by Google.
 
2) Trying to make it so that if a direct link to the PDF cannot be found on Google, the link to the journal is saved instead, so that someone can look the paper up and try to download it later.
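The fallback in (2) is a simple preference rule: keep the direct PDF link when one exists, otherwise record the journal page for a later manual lookup. A hypothetical sketch (the function and argument names are illustrative, not from the actual code):

```python
def best_link(pdf_link, journal_link):
    """Prefer a direct PDF link; fall back to the journal landing page
    so the paper can be looked up and downloaded manually later."""
    if pdf_link:
        return pdf_link
    return journal_link

print(best_link(None, "https://journal.example.org/article/123"))
# → https://journal.example.org/article/123
```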
 
====Notes====
BibTeX entries for all of the papers will be saved, but not all PDFs are available online, so not every paper viewed will have a link.
 
 
===scholarcrawl.py===
====Overview====
This code is the work-in-progress replacement for downloadPDFs.py. The problem with downloadPDFs.py was that it is impossible to find the sweet spot for avoiding detection, since there is no information online about how many clicks, or how fast, gets you marked as a robot. scholarcrawl.py works around this by catching each time Google stops us and waiting 24 hours before trying again, picking up on the same page it was stopped on. It has been in testing since Friday, December 8, 2017. As of December 12, 2017 it is continuing to run as expected and has searched through 34 pages.
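The catch-and-wait loop described above can be sketched as follows. This is an illustration of the idea, not the actual scholarcrawl.py code: `fetch_page` and `BlockedError` are stand-ins for however the real crawler fetches a results page and detects a block, and `wait_seconds` would be 24 hours in practice:

```python
import time

class BlockedError(Exception):
    """Stand-in for however the crawler detects that Google blocked it."""

def crawl(fetch_page, num_pages, wait_seconds=24 * 60 * 60):
    """Fetch pages 0..num_pages-1. Whenever a fetch raises BlockedError,
    sleep out the block and retry the same page instead of skipping it."""
    results = []
    page = 0
    while page < num_pages:
        try:
            results.append(fetch_page(page))
            page += 1  # only advance after a successful fetch
        except BlockedError:
            time.sleep(wait_seconds)  # wait out the block, then resume
    return results
```

The key design point is that `page` only advances on success, so after a 24-hour wait the crawl resumes exactly where it was stopped.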
 
====How to Use====
Before you run the program, build a directory that you want all the results to go in. Inside this directory, create a folder called "BibTeX." For example, I could make a folder in E:\McNair\Projects\Patent_Thickets called "My_Crawl." Inside My_Crawl I should make sure I have a "BibTeX" folder. You should also choose a search term and decide how many pages you want to search.
 
Open the program scholarcrawl.py in Komodo. At the very end of the program, type:
 
''main(your query, your output directory, your num pages)''
 
Replace "your query" with the search term you want (like "patent thickets", making sure to include quotes around the term). Replace "your output directory" with the output directory you want these files to go to. Still using my example above, I would type "E:\McNair\Projects\Patent_Thickets\My_Crawl", making sure to include the quotes around the directory. Finally, replace "your num pages" with the number of pages you want to search. Click the play button in the top center of the screen.
 
====What you'll get back====
After the program is done running, go back to the folder you created to see the outputs. First, in your BibTeX folder, you will see a series of files named by the BibTeX keys of papers. Each of these is a text file containing the BibTeX for the paper. In your outer folder, you will have files called "Query_your query_pdfTable7.txt", where "your query" is your search term and the 7 can be any number. Each of these files is a text file with BibTeX keys in the left column and a link to the paper's PDF in the other column.
 
====In Progress====
1) Testing
 
====Notes====
BibTeX entries for all of the papers will be saved, but not all PDFs are available online, so not every paper viewed will have a link.
