{{Project|Has project output=Tool|Has sponsor=McNair Center
|Has title=Accelerator Demo Day
|Has owner=Minh Le,
==Amazon Mechanical Turk==
Please refer to: [[Amazon Mechanical Turk for Analyzing Demo Day Classifier's Results]]

There is a file in the folder CrawledHTMLFull called FinalResultWithURL that was manually created by combining the file crawled_demoday_page_list.txt in the mother folder with the file predicted.txt. This file matches the classifier's predictions to the actual URLs of the websites.
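The merging step itself isn't shown on this page; below is a minimal sketch of how the two files could be combined, assuming crawled_demoday_page_list.txt holds one URL per line and predicted.txt holds one prediction per line in the same order (the tab-separated output format is illustrative).

<pre>
# Minimal sketch (assumed file formats): pair each crawled URL with its
# predicted label and write a combined FinalResultWithURL file for the
# MTurk / hand-coding step.
with open("crawled_demoday_page_list.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

with open("predicted.txt") as f:
    predictions = [line.strip() for line in f if line.strip()]

assert len(urls) == len(predictions), "URL list and prediction list are out of sync"

with open("FinalResultWithURL", "w") as out:
    out.write("url\tprediction\n")
    for url, pred in zip(urls, predictions):
        out.write(f"{url}\t{pred}\n")
</pre>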
Since MTurk makes it hard for us to display the downloaded HTML, it is much faster to just copy the URL into the question box rather than trying to render the downloaded page.
The advantage to this is that some websites, such as techcrunch.com, behave abnormally when downloaded as HTML, so opening these kinds of websites in the browser is actually more beneficial because the UI is not messed up. Moreover, if certain websites have paywalls or pop-up ads, the user can also click out of them. Since paywalls and pop-ups are usually scripts within the HTML, the classifier can't rule them out, because the body of the HTML may still contain the useful information we are looking for. Major paywalls and websites that require log-ins, such as LinkedIn, have been black-listed in the crawler. More detail is in the crawler section below.

However, there is a disadvantage to this: websites are ever-changing, so there is a possibility that in the future the URL may no longer be usable or may have changed to something else; downloaded HTML, on the other hand, remains the same because it does not require any internet connection to render, and thus the content is static.

To create the MTurk for this project, follow the tutorial in [[Mechanical Turk (Tool)]]. For testing and development purposes, use https://requestersandbox.mturk.com (a sketch of posting such a HIT to the sandbox follows the screenshot below).

Test account: email: mcboatfaceboaty670@gmail.com, password: sameastheoneforemail2018

For this project, the fields asked of the user are:
*Whether the page had a list of companies going through an accelerator
*The month and year of the demo day (or article)
*Accelerator name
*Companies going through the accelerator

Layout:

[[File:Screen Shot 2018-07-25 at 11.37.02 AM.png]]
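The HIT-creation code itself isn't included on this page, so the following is only a sketch of how such a HIT could be posted to the requester sandbox with boto3; the title, reward, timing values, and form markup are illustrative assumptions, not the project's actual settings.

<pre>
import boto3

# Minimal sketch (not the project's actual code): post one HIT per candidate URL
# to the MTurk requester sandbox, asking the questions listed above.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",  # sandbox API endpoint
)

HTML_QUESTION_TEMPLATE = """<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
<HTMLContent><![CDATA[
<!DOCTYPE html>
<html><head>
<script src="https://assets.crowd.aws/crowd-html-elements.js"></script>
</head><body>
<crowd-form>
  <p>Open this page: <a href="{url}" target="_blank">{url}</a></p>
  <p>Does the page list companies going through an accelerator? <crowd-input name="has_company_list" required></crowd-input></p>
  <p>Month and year of the demo day (or article): <crowd-input name="demo_day_date"></crowd-input></p>
  <p>Accelerator name: <crowd-input name="accelerator_name"></crowd-input></p>
  <p>Companies going through the accelerator: <crowd-text-area name="companies"></crowd-text-area></p>
</crowd-form>
</body></html>
]]></HTMLContent>
<FrameHeight>650</FrameHeight>
</HTMLQuestion>"""

def post_hit(url):
    # Reward, timing, and title are illustrative values only.
    return mturk.create_hit(
        Title="Find cohort companies on an accelerator demo day page",
        Description="Record the accelerator, demo day date, and companies listed on the linked page",
        Reward="0.10",
        MaxAssignments=1,
        LifetimeInSeconds=7 * 24 * 3600,
        AssignmentDurationInSeconds=1800,
        Question=HTML_QUESTION_TEMPLATE.format(url=url),
    )
</pre>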
==Hand Collecting Data==

To crawl, we only looked for data on accelerators which did not receive venture capital (which Ed found via VentureXpert) and lacked timing info. The purpose of this crawl is to find timing info where we cannot find it otherwise; if a company received VC, we can find timing info via that investment. The file we used to find instances in which we lacked timing info and lacked VC is: /bulk/McNair/Projects/Accelerators/Summer 2018/Merged W Crunchbase Data July 17.xlsx. We filtered this sheet in Excel (and checked our work by filtering in SQL) and found 809 companies that lacked timing info and didn't receive VC. From this, we found 74 accelerators which we needed to crawl. We used the crawler to search for cohort companies listed for these accelerators.
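The SQL used for that check isn't recorded here; the query below is only a sketch of the filter described above, with hypothetical table and column names (companies, received_vc, cohort_start_date, cohort_end_date).

<pre>
-- Sketch of the filter described above (hypothetical table/column names):
-- companies with no VC investment and no timing information, grouped by
-- the accelerator they went through.
SELECT accelerator, COUNT(*) AS companies_missing_timing
FROM companies
WHERE received_vc = FALSE          -- no VC found via VentureXpert
  AND cohort_start_date IS NULL    -- no timing info
  AND cohort_end_date IS NULL
GROUP BY accelerator
ORDER BY companies_missing_timing DESC;
</pre>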
During the initial test run, the number of good pages was 359. The data is then handled by hand by fellow interns.

The file for hand-coding is in /bulk/McNair/Projects/Accelerator Demo Day/Test Run/CrawledDemoDayHTMLFull/'''FinalResultWithURL'''

For the sake of collaboration, the team copied this information to a Google Sheet, accessible here: https://docs.google.com/spreadsheets/d/16Suyp364lMkmUuUmK2dy_9MeSoS1X4DfFl3dYYDGPT4/edit?usp=sharing

We split the process into five parts. Each intern will do the following:

1. Go to the given URL.

2. Record whether the page is good data (column F); this can later be used by [[Minh Le]] to refine/fine-tune training data.

3. Record whether the page is announcing a cohort or recapping/explaining a demo day (column G). This variable will be used to decide whether we should subtract weeks from the given date (e.g. if it is recapping a demo day, the cohort went through the accelerator for the past ~12 weeks, and we should subtract weeks as such); see the sketch after this list.

4. Record the date, month, year, and the companies listed for that given accelerator.

5. Note any other information, such as a cohort's special name.

Once this process is finished, we will filter only the 1s in Column F, and [[Connor Rothschild]] and [[Maxine Tao]] will work to populate empty cells in The File to Rule Them All with that data.
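The week-subtraction rule in step 3 isn't written out as code anywhere on this page; the snippet below is a minimal sketch of one way to apply it, assuming a 12-week program length (the "~12 weeks" above is an approximation, not a confirmed constant).

<pre>
from datetime import date, timedelta

# Sketch of step 3's adjustment (assumed 12-week program length): if the page
# recaps a demo day, back out an approximate cohort start date; if it announces
# a cohort, the article date is treated as the start.
PROGRAM_LENGTH = timedelta(weeks=12)  # approximate, per the "~12 weeks" note above

def cohort_start(article_date: date, is_recap: bool) -> date:
    return article_date - PROGRAM_LENGTH if is_recap else article_date

# Example: a demo-day recap published 2018-07-25 implies a start around 2018-05-02.
print(cohort_start(date(2018, 7, 25), is_recap=True))
</pre>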
Connor, edit information here.
==Advanced User Guide: An in-depth look into the project and the various settings==
The RNN currently has ~50% accuracy on both train and test data, which is rather concerning.

Test : train ratio is 1:3 (25/75).

Both models currently use the bag-of-words approach to preprocess the data, but I will try to use Yang's code in the industry classifier to preprocess using word2vec. I'm not familiar with this approach, but I will try to learn it.
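The preprocessing code isn't shown here; the snippet below only sketches a bag-of-words pipeline with the 25/75 test:train split mentioned above, using scikit-learn's CountVectorizer and train_test_split as stand-ins for whatever the classifier actually uses.

<pre>
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split

# Sketch of bag-of-words preprocessing with the 25/75 test:train split.
# `pages` and `labels` are placeholders for the crawled page texts and their
# good-demo-day-page labels (1) vs not (0).
pages = ["TechStars demo day recap ...", "Unrelated blog post ..."]
labels = [1, 0]

vectorizer = CountVectorizer(lowercase=True, stop_words="english", max_features=5000)
X = vectorizer.fit_transform(pages)  # sparse document-term count matrix

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=42
)
</pre>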
