Fall 2017
Harrison Brown Work Logs (log page)
2017-11-29:
- Got the tab-delimited text files written for USITC data. Added detail to project page.
2017-11-29:
- Finishing up converting JSON to tab-delimited text; see USITC/JSON_scraping_python. Worked on creating images with ArcGIS.
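A minimal sketch of the JSON-to-tab-delimited conversion described above, assuming the input is a list of flat JSON records; the file names are placeholders, and the real script lives in USITC/JSON_scraping_python.

 import csv
 import json

 # Placeholder file names; assumes the JSON is a list of flat dicts.
 with open("cases.json") as f:
     records = json.load(f)

 with open("cases.txt", "w", newline="") as out:
     writer = csv.writer(out, delimiter="\t")
     writer.writerow(records[0].keys())   # header row from the first record
     for rec in records:
         writer.writerow(rec.values())    # one tab-delimited row per record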
2017-11-13:
- Worked on getting JSON to tab-delimited text
2017-11-01:
- Looked at Oliver's code. Got git repository set up for the project on Bonobo. Started messing around with reading the XML documents in Java.
2017-10-30:
- Worked on seeing what data can be gathered from the CSV and XML files. Started a project page for the project.
2017-10-26:
- Met with Ed to talk about the direction of the project. Starting to work on extracting information from the XML files. Working on adding documentation to wiki and work log. Looking into work from other projects that may use XML.
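For the XML extraction, a hedged sketch using Python's standard library is below (the log also mentions exploring this in Java); the element and tag names are placeholders, not the real USITC schema.

 import xml.etree.ElementTree as ET

 tree = ET.parse("investigation.xml")                  # hypothetical file name
 root = tree.getroot()
 for case in root.iter("case"):                        # placeholder element name
     number = case.findtext("investigationNumber")     # placeholder tag names
     title = case.findtext("title")
     print(number, title)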
2017-10-25:
- Found information about a USITC database that we could use. Added this information to the wiki, and updated information on USITC wiki page.
2017-10-19:
- Continued to look into NLTK. Talked with Ed about looking into alternative approaches to gathering this data.
2017-10-18:
- Trying to figure out the best way to extract respondents from the documents. Right now, using exclusively NLTK will not get us any more accuracy than using regular expressions. Currently neither will allow us to match every entity correctly, so trying to figure out alternate approaches.
2017-10-16:
- NLTK
  - NLTK Information
    - Need to convert text to ASCII. Had issues with my PDF texts and had to convert them.
    - Can use the sent_tokenize() function to split a document into sentences, which is easier than using regular expressions.
    - Use pos_tag() to tag the sentences. This can be used to extract proper nouns (a minimal sketch follows this list).
      - Trying to figure out how to use this to grab location data from these documents.
    - Worked with Peter to try to extract geographic information from the documents. We looked into the tools Geograpy and GeoText. Geograpy does not have the functionality that we would like. GeoText looks better, but we have issues with dependencies. Will try to resolve these next time.
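A minimal sketch of the sent_tokenize()/pos_tag() steps above, keeping the words tagged as proper nouns; it assumes the text is already ASCII and that the standard NLTK tokenizer and tagger models are downloaded. The file name is a placeholder.

 import nltk
 from nltk.tokenize import sent_tokenize, word_tokenize

 text = open("case_notice.txt").read()      # hypothetical text file
 proper_nouns = []
 for sentence in sent_tokenize(text):       # split the document into sentences
     tagged = nltk.pos_tag(word_tokenize(sentence))
     # NNP / NNPS are the Penn Treebank tags for proper nouns
     proper_nouns.extend(word for word, tag in tagged if tag.startswith("NNP"))
 print(proper_nouns)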
2017-10-11:
- Started to use NLTK library for gathering information to extract respondents. See code in Projects/USITC/ProcessingTexts
2017-10-05:
- Made photos for the requested maps in ArcGIS with Peter and Jeemin.
To access:
- Go to E:\McNair\Projects\Agglomeration\HarrisonPeterWorkArcGIS; the photos can be found in there.
- To generate the photos, open ArcMap with the beginMapArc file.
- To generate a PNG, click File, then Export to export the photos.
- To adjust the data, right-click the table name in the Layers tab, hit Properties, then Query Builder.
2017-10-04:
- Worked with Peter on connecting ArcGIS to the database and displaying different points in ArcGIS
2017-10-02:
- Started work with ArcGIS. Got the data with startups from Houston into the ArcGIS application. For notes see McNair/Projects/Agglomeration.
2017-09-28:
- Helped Christy with setup on the Postgres server. Looked through text documents to see what information I could gather. Looked at the Stanford NLTK library for extracting the respondents from the documents.
2017-09-28:
- Got the PDFs parsed to text. Some of the formatting is off; will need to determine if the data can still be gathered.
2017-09-25:
- Got 3000 PDFs downloaded. The script works. Completed a task to get emails for people who had written papers about economics and entrepreneurship. Started work on parsing the PDFs to text.
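The log does not say which tool was used to parse the PDFs to text; one possible approach is to shell out to poppler's pdftotext for every downloaded file (the paths below are placeholders).

 import subprocess
 from pathlib import Path

 pdf_dir = Path("pdfs")        # hypothetical download folder
 txt_dir = Path("texts")
 txt_dir.mkdir(exist_ok=True)
 for pdf in pdf_dir.glob("*.pdf"):
     out = txt_dir / (pdf.stem + ".txt")
     # -layout tries to preserve the original column layout
     subprocess.run(["pdftotext", "-layout", str(pdf), str(out)], check=False)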
2017-09-20:
- The shell program did not work. Created a Python program that catches all exceptions (URL does not exist, lost connection, and improperly formatted URL); see the sketch below. Hopefully it will complete with no problems. This program is found on the database server under the USITC folder.
- Got connected to the database server and mounted the drive onto my computer. Got the list of all the PDFs on the website and started a shell script on the database server to download all of the PDFs. I will leave it running overnight; hopefully it completes by tomorrow.
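A sketch of the download loop with the kind of error handling described above (missing URL, lost connection, improperly formatted URL); the URL list file and output folder are placeholders, not the real paths on the database server.

 import os
 import urllib.error
 import urllib.request

 with open("pdf_urls.txt") as f:            # hypothetical list of PDF URLs
     urls = [line.strip() for line in f if line.strip()]

 os.makedirs("pdfs", exist_ok=True)
 for url in urls:
     name = os.path.join("pdfs", url.rsplit("/", 1)[-1])
     try:
         urllib.request.urlretrieve(url, name)
     except (urllib.error.HTTPError, urllib.error.URLError) as e:
         print("skipping", url, e)          # URL does not exist or connection lost
     except ValueError as e:
         print("bad URL", url, e)           # improperly formatted URL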
2017-09-17:
- Added features to the Python program to pull the dates in numerical form. Worked on pulling the PDFs from the website; currently working on pulling them in Python. The program can run and pull PDFs on my local machine, but it doesn't work on the Remote Desktop. I will work on this next time.
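A tiny sketch of converting a date to numerical form with datetime; the input format shown is an assumption about how the dates appear on the site.

 from datetime import datetime

 raw = "October 5, 2017"                                 # assumed input format
 numeric = datetime.strptime(raw, "%B %d, %Y").strftime("%Y-%m-%d")
 print(numeric)                                          # 2017-10-05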
2017-09-14:
- Have a Python program that can scrape the entire webpage and navigate through all of the pages that contain Section 337 documents. You can see these files and more information on the USITC project page. It can pull all of the information available in the HTML for each case. The PDFs now need to be scraped; will start work on that next time. Generated a CSV file with more than 4000 entries from the webpage. There is a small edge case I need to fix where the entry does not contain the Investigation No.
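A rough sketch of the kind of crawler described above: walk the paginated Section 337 notice listing, pull each table row out of the HTML, and write a CSV. The URL pattern, page count, and HTML structure here are assumptions, not the real site layout.

 import csv
 import requests
 from bs4 import BeautifulSoup

 rows = []
 for page in range(50):                                   # placeholder page count
     url = f"https://www.usitc.gov/secretary/fed_reg_notices/337?page={page}"  # assumed URL pattern
     soup = BeautifulSoup(requests.get(url).text, "html.parser")
     for tr in soup.select("table tr")[1:]:               # skip the header row
         cells = [td.get_text(strip=True) for td in tr.find_all("td")]
         if cells:
             rows.append(cells)

 with open("section337.csv", "w", newline="") as f:
     csv.writer(f).writerows(rows)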
2017-09-13:
- Worked on parsing the USITC website Section 337 Notices. Nearly have all of the data I can scrape. The scraper works, but there are a few edge cases where information in the tables is part of a Notice but does not have an Investigation Number. Will hopefully finish this next time. Also added my USITC project to the projects page; I did not have it linked.
2017-09-11: Met with Dr. Egan and got assigned a project. Set up the USITC project page and started coding the web crawler in Python. Look in McNair/Projects/USITC for project notes and code.
2017-09-07: Set up work log pages, Slack, and Microsoft Remote Desktop.