Demo Day Page Parser
|Has title|Demo Day Page Parser|
|Has owner|Peter Jalbert|
|Has start date|
|Has deadline date|
|Has project status|Subsume|
|Dependent(s):|Demo Day Page Google Classifier, U.S. Seed Accelerators|
|Subsumed by:|Accelerator Demo Day|
|Has sponsor|McNair Center|
|Has project output|Tool|
The goal of this project is to combine Selenium-based web scraping with machine learning to identify good candidate demo day web pages for accelerators. Relevant information on the project can be found on the Accelerator Data page.
The code directory for this project can be found:
The Selenium-based crawler can be found in the file below. This script runs a Google search on accelerator names and keywords, and saves the URLs and HTML pages for later use:
A script to rip the HTML to TXT can be found below. This script reads HTML files from the DemoDayHTML directory and writes plain-text versions to the DemoDayTxt directory:
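The HTML-to-text step can be sketched with the standard library alone. The directory names below are the ones given above; the class and function names are illustrative, not necessarily those used in the actual script:

```python
import os
from html.parser import HTMLParser

HTML_DIR = "DemoDayHTML"  # input directory named in the project description
TXT_DIR = "DemoDayTxt"    # output directory named in the project description

class TextExtractor(HTMLParser):
    """Collect visible text, skipping the contents of <script> and <style>."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def html_to_text(html):
    """Strip tags and return the visible text, one chunk per line."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)

def rip_directory(html_dir=HTML_DIR, txt_dir=TXT_DIR):
    """Convert every .html file in html_dir to a .txt file in txt_dir."""
    os.makedirs(txt_dir, exist_ok=True)
    for name in os.listdir(html_dir):
        if not name.endswith(".html"):
            continue
        path = os.path.join(html_dir, name)
        with open(path, encoding="utf-8", errors="ignore") as f:
            text = html_to_text(f.read())
        out_name = os.path.splitext(name)[0] + ".txt"
        with open(os.path.join(txt_dir, out_name), "w", encoding="utf-8") as f:
            f.write(text)
```

Using `HTMLParser` rather than a third-party parser keeps the sketch dependency-free; a production version might prefer BeautifulSoup for robustness against malformed HTML.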
A script to match keywords (accelerator and cohort names) against the resulting text pages can be found in KeyTerms.py. The script takes the keywords listed in CohortAndAcceleratorsFullList.txt and the text files in DemoDayTxt, and creates a file with the number of matches of each keyword against each text file.
The script can be found:
The Keyword matches text file can be found:
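The core of the matching can be sketched as below, assuming whole-word, case-insensitive matching; the function names are illustrative, and the actual KeyTerms.py may differ:

```python
import re

def load_keywords(path="CohortAndAcceleratorsFullList.txt"):
    """Read one keyword (an accelerator or cohort name) per line."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def count_matches(text, keywords):
    """Return {keyword: number of whole-word, case-insensitive matches}."""
    lower = text.lower()
    counts = {}
    for kw in keywords:
        # \b anchors keep "Her" from matching inside "Herald", etc.
        pattern = r"\b" + re.escape(kw.lower()) + r"\b"
        counts[kw] = len(re.findall(pattern, lower))
    return counts
```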
A script to determine the text files of webpages that have at least one hit of these key words can be found:
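Given per-file counts of the shape produced above, the at-least-one-hit filter reduces to a short sketch (the function name is illustrative):

```python
def files_with_hits(match_counts):
    """Keep the files whose keyword counts include at least one hit.

    match_counts maps filename -> {keyword: count}.
    """
    return [name for name, counts in match_counts.items()
            if any(c > 0 for c in counts.values())]
```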
Downloading HTML Files with Selenium
The code for utilizing Selenium to download HTML files can be found in the DemoDayCrawler.py file.
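The crawler's overall shape is roughly the following sketch. It assumes Selenium 4 with ChromeDriver; the CSS selector, delay, and file-naming scheme are illustrative assumptions, not taken from DemoDayCrawler.py:

```python
import os
import time
import urllib.parse

SEARCH_TERM = "Demo Day"   # the search term the project settled on
MAX_LINKS = 10             # the per-accelerator cap settled on after testing

def search_url(accelerator, term=SEARCH_TERM):
    """Build a Google search URL for an accelerator name plus the search term."""
    query = urllib.parse.quote_plus(f'"{accelerator}" {term}')
    return "https://www.google.com/search?q=" + query

def crawl(accelerators, out_dir="DemoDayHTML"):
    """Fetch each accelerator's result links and save the pages as HTML."""
    from selenium import webdriver              # assumes selenium 4 is installed
    from selenium.webdriver.common.by import By
    os.makedirs(out_dir, exist_ok=True)
    driver = webdriver.Chrome()
    try:
        for acc in accelerators:
            driver.get(search_url(acc))
            time.sleep(2)                       # crude politeness delay
            hrefs = [a.get_attribute("href")
                     for a in driver.find_elements(By.CSS_SELECTOR, "div#search a")]
            hrefs = [h for h in hrefs if h and h.startswith("http")][:MAX_LINKS]
            for i, url in enumerate(hrefs):
                driver.get(url)
                name = f"{acc.replace(' ', '_')}_{i}.html"
                with open(os.path.join(out_dir, name), "w", encoding="utf-8") as f:
                    f.write(driver.page_source)
    finally:
        driver.quit()
```

Saving `driver.page_source` rather than re-fetching with a plain HTTP client captures the page as rendered, which matters for JavaScript-heavy result pages.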
The initial observation set scraped 100 links for each of 20 sample accelerators drawn from the overall accelerator list. These sample pages were converted to text and scored to remove web pages with no mention of relevant accelerators or companies.
Once the process was tweaked in response to the initial sample testing, it was run again over all accelerators. The testing determined that we needed to take no more than 10 links for each accelerator, and that 'Demo Day' was a suitable search term.
These files hold data for all the accelerators, not just the test set.
The full list of accelerators:
The full list of search terms to match with the text versions of news articles:
A list of accelerators, queries, and urls:
A directory with HTML files for all accelerator demo day results:
A directory with TXT files for all accelerator demo day results:
A file with the name of the results that passed keyword matching:
A file with an analysis of the most frequent matched words in each text file:
The first pass through the data revealed articles with thousands of keyword hits. This seemed highly suspicious, so we dug deeper to investigate the cause.
The following script in the same directory analyzes the keyword matches to determine the words with the highest number of hits.
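One way to surface the highest-hit keywords from the matches data is to aggregate the per-file counts; this is an illustrative sketch, not the script itself:

```python
from collections import Counter

def top_keywords(match_counts, n=10):
    """Sum each keyword's hits across all files and return the n largest totals.

    match_counts maps filename -> {keyword: count}.
    """
    totals = Counter()
    for counts in match_counts.values():
        totals.update(counts)     # Counter.update adds counts, not replaces
    return totals.most_common(n)
```

A suspiciously common keyword (e.g. a company named after an everyday word) shows up at the top of this list immediately.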
After investigation, it was found that many of the company names are also common English words. Here are some of the problem companies, listed after their associated accelerator:
Matter: This., website
Fledge: HERE, website
LightBank Start: Zero
Entrepreneurs Roundtable Accelerator: SELECT
Y Combinator: Her
Y Combinator: Final
Rather than removing these companies from the list of search terms, we opted to exclude as search terms any words that rank among the 10,000 most common English words. For reference, we used the list of the 10,000 most common English words compiled from a Google research study. The GitHub documentation of the study can be found here.
The file containing the 10000 most common English words can be found:
The results seemed much more plausible after removing these words. Some company names still appeared many times, but in the correct context.