{{McNair Projects
|Has title=Patent Thicket
|Has owner=Grace Tan
|Has start date=Summer 2018
|Has keywords=
|Has project status=Active
|Is dependent on=Google Scholar Crawler, pdfdownloader.py, pdf_to_bulk_PTLR.py
}}
===Location of Files===
*Code: E://McNair/Software/Patent_Thicket
*Downloaded PDFs: E://McNair/Projects/Software/Patent_Thicket/AllPDFs/successful_downloads
*Converted PDFs (txt files): E://McNair/Projects/Software/Patent_Thicket/Parsed_Texts
===Google Scholar Crawler===
Used the [[Google Scholar Crawler]].
I used the Selenium box and switched between the Rice Visitor, Rice Owls, and eduroam networks to prevent Google Scholar from blocking me.
I downloaded 613 PDF URLs and 958 BibTeX files from 100 pages of Google Scholar results for the query "patent thicket."
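For reference, a minimal sketch of how PDF links can be collected from Google Scholar result pages with Selenium. This is not the crawler's actual code; the CSS selector, driver setup, and output file name are assumptions.

<pre>
# Sketch only: collect PDF links from Google Scholar result pages with Selenium.
# The "gs_or_ggsm" selector and the Firefox driver are assumptions.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # assumes geckodriver is on the PATH
pdf_urls = []

for page in range(100):  # 100 result pages, 10 results per page
    url = ('https://scholar.google.com/scholar?q="patent+thicket"&start=%d'
           % (page * 10))
    driver.get(url)
    # PDF links usually sit in the right-hand block next to each result.
    for link in driver.find_elements(By.CSS_SELECTOR, 'div.gs_or_ggsm a'):
        href = link.get_attribute('href')
        if href:
            pdf_urls.append(href)
    time.sleep(10)  # pause between pages to reduce the chance of being blocked

driver.quit()

with open('pdf_urls.txt', 'w') as f:
    f.write('\n'.join(pdf_urls))
</pre>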
===Downloading PDFs===
Used pdfdownloader.py.
I tweaked the code to handle repeated file names.
5 of the PDF URLs were not downloadable, so I ended up with 608 working PDFs.
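A minimal sketch of the repeated-file-name handling described above, assuming a requests-based downloader. The function and file names are illustrative; this is not the actual pdfdownloader.py code.

<pre>
# Sketch only: download PDF URLs while avoiding filename collisions.
import os
import requests

def unique_path(folder, filename):
    """Append _1, _2, ... to the name if a file with it already exists."""
    base, ext = os.path.splitext(filename)
    candidate = os.path.join(folder, filename)
    counter = 1
    while os.path.exists(candidate):
        candidate = os.path.join(folder, '%s_%d%s' % (base, counter, ext))
        counter += 1
    return candidate

def download_all(url_file, out_folder):
    failed = []
    with open(url_file) as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        name = url.rstrip('/').split('/')[-1] or 'download.pdf'
        if not name.lower().endswith('.pdf'):
            name += '.pdf'
        try:
            resp = requests.get(url, timeout=60)
            resp.raise_for_status()
        except requests.RequestException:
            failed.append(url)  # e.g. the 5 URLs that could not be downloaded
            continue
        with open(unique_path(out_folder, name), 'wb') as out:
            out.write(resp.content)
    return failed
</pre>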
===pdf_to_txt_bulk_PTLR.py===
The code must be run on the E drive because the libraries it uses are not installed on Z.
I reinstalled pdfminer, which might cause problems in the future if the library changes.
This program converts all of the PDFs to txt files. It also generates two log files, _LOG_ERR.txt and _LOG_RUN.txt, which list the names of the PDFs that could not be converted and those that were converted successfully. Some of the files that were converted successfully, especially the very small ones, do not contain the text of the paper.
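A minimal sketch of the bulk conversion and logging described above, assuming pdfminer.six's extract_text function. The actual pdf_to_txt_bulk_PTLR.py implementation may differ; folder arguments here are illustrative.

<pre>
# Sketch only: bulk PDF-to-txt conversion with run/error logs.
# Assumes pdfminer.six is installed.
import os
from pdfminer.high_level import extract_text

def convert_folder(pdf_folder, txt_folder):
    run_log = open(os.path.join(txt_folder, '_LOG_RUN.txt'), 'w')
    err_log = open(os.path.join(txt_folder, '_LOG_ERR.txt'), 'w')
    for name in os.listdir(pdf_folder):
        if not name.lower().endswith('.pdf'):
            continue
        try:
            text = extract_text(os.path.join(pdf_folder, name))
            out_name = os.path.splitext(name)[0] + '.txt'
            with open(os.path.join(txt_folder, out_name), 'w',
                      encoding='utf-8') as out:
                out.write(text)
            run_log.write(name + '\n')   # converted successfully
        except Exception:
            err_log.write(name + '\n')   # could not be converted
    run_log.close()
    err_log.close()
</pre>

Note that a conversion can end up in _LOG_RUN.txt yet still produce an almost empty txt file, for example when the PDF is a scanned image with no embedded text layer.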