Listing Page Classifier

{{Project
|Has title=Listing Page Classifier
|Has owner=Nancy Yu
|Has start date=
|Has deadline date=
|Has project status=Active
}}
 
== Text Processing ==
 
 
There are two possible classification methods for processing the text of target HTML pages. The first is a "Bag of Words" approach, which uses Term Frequency-Inverse Document Frequency (TF-IDF) to do basic natural language processing and select words or phrases with discriminative power. The second is a Word2Vec approach, which uses a shallow two-layer neural network to reduce page descriptions to vectors with high discriminative potential. (See "Memo for Evan" in E:\mcnair\Projects\Incubators for further detail.)
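A minimal sketch of the two approaches is given below, assuming scikit-learn for the TF-IDF bag of words and gensim for Word2Vec (neither library is named on this page); page_texts and labels are hypothetical placeholders for the extracted page text and hand-labeled listing/non-listing flags.

<pre>
# Bag-of-Words approach: TF-IDF features plus a simple linear classifier.
# scikit-learn is an assumption; the project may use a different library.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical placeholders for extracted HTML text and hand labels.
page_texts = ["Our portfolio companies include ...", "Contact us for office space ..."]
labels = [1, 0]  # 1 = listing page, 0 = not a listing page

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
X = vectorizer.fit_transform(page_texts)
clf = LogisticRegression().fit(X, labels)

# Word2Vec approach: average each page's word vectors (gensim assumed).
from gensim.models import Word2Vec
import numpy as np

tokens = [t.lower().split() for t in page_texts]
w2v = Word2Vec(sentences=tokens, vector_size=100, window=5, min_count=1)
page_vectors = np.array([w2v.wv[toks].mean(axis=0) for toks in tokens])
</pre>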
 
  
 
== Main Tasks ==
 
# Build a site map generator: output every internal link of the input websites
# Build a generator that captures screenshots of individual web pages (a minimal sketch follows this list)
# Build a CNN classifier using Python and TensorFlow
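A minimal sketch of what the screenshot generator (task 2) might look like, assuming Selenium with headless Chrome; the actual tool used for this task is not specified on this page.

<pre>
# Capture a screenshot of a single web page (Selenium is an assumption).
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def capture_screenshot(url, out_path):
    options = Options()
    options.add_argument("--headless")
    options.add_argument("--window-size=1280,1024")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        driver.save_screenshot(out_path)  # writes a PNG file
    finally:
        driver.quit()

# Example usage on a placeholder URL:
capture_screenshot("http://www.example.com", "example.png")
</pre>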
 
== Approaches (IN PROGRESS) ==

# URL Crawler
  E:\projects\listing page identifier\urlcrawler.py
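urlcrawler.py is not reproduced here; a minimal sketch of an internal-link crawler of this kind, assuming the requests and BeautifulSoup libraries, is shown below. The page limit and error handling are illustrative placeholders.

<pre>
# Breadth-first crawl that collects every internal link of a seed site.
# requests and BeautifulSoup are assumptions; urlcrawler.py may differ.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

def crawl_internal_links(seed_url, max_pages=200):
    domain = urlparse(seed_url).netloc
    seen, queue = {seed_url}, [seed_url]
    while queue and len(seen) <= max_pages:
        url = queue.pop(0)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            # Keep only links that stay on the seed site's domain.
            if urlparse(link).netloc == domain and link not in seen:
                seen.add(link)
                queue.append(link)
    return sorted(seen)
</pre>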
=== Image Processing ===
This method would likely rely on a [https://en.wikipedia.org/wiki/Convolutional_neural_network convolutional neural network (CNN)] to classify the HTML elements visible in web page screenshots. One implementation would combine a VGG16 or ResNet architecture with batch normalization to improve accuracy in this context.
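A minimal sketch of such a classifier is given below, assuming TensorFlow/Keras (named under Main Tasks) with a frozen VGG16 base and a batch-normalized classification head; the input size, layer widths, and hyperparameters are illustrative placeholders.

<pre>
# Binary screenshot classifier: pretrained VGG16 base plus a small head
# with batch normalization. All hyperparameters here are placeholders.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: keep the pretrained features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.BatchNormalization(),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # listing page vs. not
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(...) would then be run on labeled screenshots from the crawler.
</pre>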
