Demo Day Page Parser

McNair Project
Demo Day Page Parser
Project Information
Project Title:       Demo Day Page Parser
Owner:               Peter Jalbert
Start Date:
Deadline:
Primary Billing:
Notes:
Has project status:  Active


Project Specs

The goal of this project is to combine Selenium-based web crawling with machine learning to identify good candidate web pages for accelerator Demo Days. Relevant information on the project can be found on the Accelerator Data page.

Code Location

The code directory for this project is:

E:\McNair\Software\Accelerators

The Selenium-based crawler can be found in the file below. It runs a Google search on each accelerator name plus keywords, and saves the resulting URLs and HTML pages for later use:

DemoDayCrawler.py
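
The script itself is not reproduced on this page; a minimal sketch of this kind of crawler, assuming Selenium with a local Firefox/geckodriver install, and using placeholder accelerator names, keyword, output directory, and Google result selector (none of which are taken from DemoDayCrawler.py), might look like:

 # Minimal sketch of the crawl described above, not the actual DemoDayCrawler.py.
 # The accelerator list, keyword, output paths, and CSS selector are placeholders.
 import os
 import time
 import urllib.parse
 
 from selenium import webdriver
 from selenium.webdriver.common.by import By
 
 ACCELERATORS = ["Y Combinator", "Techstars"]   # placeholder names
 KEYWORD = "demo day"
 OUT_DIR = "demo_day_html"                      # placeholder output directory
 
 os.makedirs(OUT_DIR, exist_ok=True)
 driver = webdriver.Firefox()                   # assumes geckodriver is on PATH
 
 with open(os.path.join(OUT_DIR, "urls.txt"), "w", encoding="utf-8") as url_log:
     for name in ACCELERATORS:
         query = urllib.parse.quote_plus(f"{name} {KEYWORD}")
         driver.get(f"https://www.google.com/search?q={query}")
         time.sleep(2)  # crude politeness delay; Google may still throttle or block
 
         # Collect result links; the selector is an assumption and will need
         # updating as Google's result markup changes.
         links = [a.get_attribute("href")
                  for a in driver.find_elements(By.CSS_SELECTOR, "div#search a")]
         links = [u for u in links if u and u.startswith("http")]
 
         for i, url in enumerate(links[:10]):   # keep the top ten hits per query
             url_log.write(f"{name}\t{url}\n")
             driver.get(url)
             page_file = os.path.join(OUT_DIR, f"{name}_{i}.html".replace(" ", "_"))
             with open(page_file, "w", encoding="utf-8") as f:
                 f.write(driver.page_source)    # save raw HTML for later parsing
 
 driver.quit()

Saving the raw page source keeps the later text extraction and classification steps independent of the crawl itself.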


A script that rips the saved HTML pages to plain text can be found below. It reads HTML files from one directory and writes TXT versions to another:

htmlToText.py
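
The conversion script itself is not shown here; a minimal sketch of an HTML-to-text ripper along these lines, assuming BeautifulSoup (bs4) is available and using placeholder directory names, might look like:

 # Minimal sketch of an HTML-to-TXT ripper, not the actual htmlToText.py.
 # Assumes BeautifulSoup (bs4) is installed; the directory names are placeholders.
 import os
 
 from bs4 import BeautifulSoup
 
 IN_DIR = "demo_day_html"    # placeholder: directory of saved HTML pages
 OUT_DIR = "demo_day_txt"    # placeholder: directory for extracted text
 
 os.makedirs(OUT_DIR, exist_ok=True)
 
 for filename in os.listdir(IN_DIR):
     if not filename.lower().endswith((".html", ".htm")):
         continue
     with open(os.path.join(IN_DIR, filename), encoding="utf-8", errors="ignore") as f:
         soup = BeautifulSoup(f.read(), "html.parser")
 
     # Drop script and style blocks so only visible page text survives.
     for tag in soup(["script", "style"]):
         tag.decompose()
 
     text = "\n".join(line.strip()
                      for line in soup.get_text().splitlines() if line.strip())
 
     out_name = os.path.splitext(filename)[0] + ".txt"
     with open(os.path.join(OUT_DIR, out_name), "w", encoding="utf-8") as f:
         f.write(text)

Stripping script and style blocks before calling get_text() limits the TXT output to visible page text, which is closer to what a downstream classifier would need.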