{{Project|Has project output=Data|Has sponsor=McNair Center
|Has title=Redesigning Patent Database
|Has owner=Shelby Bice
|Has deadline=201705
|Has keywords=Database, Patent
|Has notes=|Is dependent on=|Depends upon it=|Has project status=Complete
}}
Documentation on the process of updating the patent database with new data and eventual designs for the new patent database. Not to be confused with "Patent Data Restructure", which deals with condensing and cleaning USPTO assignees data.
==Deliverables by end of semester==
* Write thorough documentation on the schema and on what documentation should be written when it is altered
* Build new tables from the Patent Data Restructure and Small Inventors Project
* Specifications of the new patent database

'''NOTE FOR ED:''' Places I looked for the parser for the USPTO assignment data and did not find it are listed in a txt file under McNair/Projects/Redesigning Patent Database called "Notes on where the correct parser for the USPTO Assignment Data is and is not". Hopefully that will help narrow your search or help you determine if you should keep debugging the one we have.

'''For anyone looking to pick up this project:''' The project ended with notes on how to add data to the database (see Current Design and Scripts Documentation). Below are some checklist items in case we overhaul the database and rebuild all the tables from scratch, which could be a next step. There are a lot of notes on this page about designing the database, which tables are necessary, and which have been created for other projects. Feel free to consult these if you are overhauling the database. Rather than rebuilding the entire database, I would suggest going through the tables in the existing "patent" database and removing extraneous tables that no longer have any use: I determined that the existing schema isn't that convoluted, and the process of downloading, parsing, and cleaning data from the USPTO, let alone the Harvard Dataverse, is a huge hassle, so in my opinion there is no reason to reinvent the wheel. If you just want to pull new data and need to know the process, see "Current Design and Scripts Documentation." If you are rewriting any of the Perl scripts that download data from the USPTO, specifically USPTO Assignment data (sometimes referred to on the wiki as USPTO Assignee data), please take a look at "Specifications of USPTO Assignment Data To Extract in Future Perl Scripts."

I looked through the DTDs and documentation on the USPTO bulk website and found a couple of things that we could be extracting from the data we download that could be useful, like abstracts and more inventor information.
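Pulling an abstract and inventor names out of a grant XML record can be sketched as below. This is a hedged Python illustration of the idea only (the project's real parsers are Perl), and the element names (<code>abstract</code>, <code>inventors/inventor</code>, <code>addressbook</code>) are assumptions modeled on the v4.x grant DTDs; verify them against the DTD you actually download.

```python
import xml.etree.ElementTree as ET

# A minimal fragment shaped like a USPTO grant record. The element names
# are assumptions based on the v4.x grant DTDs, not a confirmed schema.
SAMPLE = """
<us-patent-grant>
  <abstract><p>A widget that frobnicates.</p></abstract>
  <inventors>
    <inventor>
      <addressbook>
        <first-name>Ada</first-name>
        <last-name>Lovelace</last-name>
      </addressbook>
    </inventor>
  </inventors>
</us-patent-grant>
"""

def extract(xml_text):
    """Return (abstract text, list of (first, last) inventor names)."""
    root = ET.fromstring(xml_text)
    abstract = " ".join(p.text for p in root.findall("./abstract/p") if p.text)
    inventors = [
        (inv.findtext("./addressbook/first-name"),
         inv.findtext("./addressbook/last-name"))
        for inv in root.findall("./inventors/inventor")
    ]
    return abstract, inventors

abstract, inventors = extract(SAMPLE)
print(abstract)
print(inventors)
```

A real run would iterate this over every record in a bulk XML file rather than a single inline fragment.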
'''TODO on 4/13/2017''' - Keep working on updating patent data
* How to parse patent data
* How to parse assignment data
* Also update documentation as I go
== '''Redesigning Patent Database''' ==
==Related Projects==
*[[Patent Assignment Data Restructure]]
*[[Small Inventors Project]] - uses Fee Status and Citations
*[[Medical Centers and Grants]] - uses patent assignees, specifically their zipcodes and organizations
'''Previous documentation on the patent database:'''
'''As of 3/21/2017 the most up-to-date database containing patent data is "patent", not "allpatent" or "allpatent_clone", and "patent" is the database that the other patent data redesign project, Restructuring Patent Data (link above), is working with. The database "allpatent" has since been removed, but it can be restored if it is needed.'''
[[Patent_Data_(Wiki_Page)|Patent Data]] - overview of what the data is and where it came from; probably the starting point for changing documentation
[[Patent|Patent Database]] - overview of the schema of the database (specifically the database "patent", which includes data from the Harvard Dataverse (originally stored in patentdata) and the USPTO (patent_2015))
[[USPTOAssigneesData|USPTO Assignees Database]] - enhances assignee info in the patent database; also being redesigned
[[Patent_Data_Issues|Problems with Patent Database]] - lists issues with the current schema
[[Data_Model|Previous ER Diagram]] - does not match up with the schema described in [[Patent|Patent Database]] and contains an outdated list of what we want to pull from the XML files
[[Patent_Data_Processing_-_SQL_Steps|Processing Patent Data]] - states that allpatent is the newest database and an amalgamation of patentdata and patent_2015
== Description ==
== Development ==
Design will be built upon a relational database model. I will be referencing this article on database design as I develop the design (http://en.tekstenuitleg.net/articles/software/database-design-tutorial/one-to-many.html), and I will be creating an ER diagram using [https://erdplus.com/#/standalone ERDPlus] or [https://creately.com/app/?tempID=hqdgwjki1&login_type=demo# Creately].
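The one-to-many pattern from that tutorial is the core of the design: one patent row, many dependent rows (assignees, inventors, citations) pointing back at it through a foreign key. A minimal sketch of the pattern, using an in-memory SQLite database and hypothetical table/column names (not the live "patent" schema):

```python
import sqlite3

# Illustrative one-to-many schema: one patent, many assignees.
# Table and column names are hypothetical, chosen for the example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patents (
    patent_id INTEGER PRIMARY KEY,
    title     TEXT
);
CREATE TABLE assignees (
    assignee_id INTEGER PRIMARY KEY,
    patent_id   INTEGER REFERENCES patents(patent_id),  -- the "many" side
    name        TEXT
);
""")
conn.execute("INSERT INTO patents VALUES (1, 'Example patent')")
conn.executemany("INSERT INTO assignees VALUES (?, ?, ?)",
                 [(1, 1, 'Acme Corp'), (2, 1, 'Rice University')])

# Joining recovers the relationship: each assignee row carries its patent.
rows = conn.execute("""
    SELECT p.title, a.name
    FROM patents p JOIN assignees a ON a.patent_id = p.patent_id
    ORDER BY a.assignee_id
""").fetchall()
print(rows)
```

The same shape extends to inventors, citations, and fee events, which is why most of the pto- tables key on a patent number.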
 
For notes on the existing database called "patent" and its tables, suggestions on redesigning tables, etc., please see e:/McNair/Projects/Redesigning Patent Database.
== Current Design and Scripts Documentation ==
The following pages are relevant to how previous databases are built/how to build tables in the database:
[[Harvard_Dataverse_Data|Harvard Dataverse Data]] - explains how to make tables from Harvard Dataverse data, where to find scripts, etc.
[[USPTO_Bulk_Data_Processing|USPTO Data]] - explains how to make tables from USPTO data, where to find scripts, etc., specifically for assignment data
[[Patent_Data_Extraction_Scripts_(Tool)|Patent Data Extraction]] - explains locations of XML files and lists (at the bottom) where the Perl scripts can be found
[[Patent_Data_Cleanup_-_June_2016|Patent Data Cleanup (June 2016)]] - explains changes that were made to clean up problems in the database allpatent as a result of merging the Harvard Dataverse data and the USPTO data
[[Patent_Data_Processing_-_SQL_Steps|Patent Data Processing - SQL Steps]] - explains the SQL needed to merge two existing databases, one that contained the Harvard Dataverse data and one that contained the USPTO data
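The merge described in the SQL Steps page boils down to copying rows from one table into another while skipping patents already present. A hedged sketch of that idea, run here against SQLite with made-up table names (the real steps target Postgres and the actual schema, so see the linked page for the authoritative SQL):

```python
import sqlite3

# Hypothetical merge: "patents" is the main table, "patents_new" is a
# staging table; copy only rows whose patent number is not already loaded.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patents     (patentnumber INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE patents_new (patentnumber INTEGER PRIMARY KEY, title TEXT);
INSERT INTO patents     VALUES (1, 'old A'), (2, 'old B');
INSERT INTO patents_new VALUES (2, 'dup B'), (3, 'new C');

INSERT INTO patents
    SELECT n.patentnumber, n.title
    FROM patents_new n
    WHERE NOT EXISTS (SELECT 1 FROM patents p
                      WHERE p.patentnumber = n.patentnumber);
""")
rows = conn.execute(
    "SELECT patentnumber, title FROM patents ORDER BY patentnumber").fetchall()
print(rows)  # patent 2 keeps its original row; only patent 3 is added
```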
Here are the instructions I'm developing for downloading, parsing, and hopefully adding new data to the database, since the existing documentation is very sparse (these can also be found under McNair/Projects/Redesigning Patent Database/Instructions on how to download patent data from USPTO bulk data).
Missing information (represented by NULL):
* nationality (char sequence)
* residence (char sequence)
* asgseq (int)
* asgtype (int)
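When a parsed record lacks one of these fields, the value passed to the database should be a true NULL rather than an empty string. A small Python sketch of the convention (the table and column names mirror the list above but are illustrative; the real loader is a Perl script):

```python
import sqlite3

# Illustrative: fields absent from the parsed record become None,
# which the driver stores as SQL NULL.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ptoassignee (
    name TEXT, nationality TEXT, residence TEXT,
    asgseq INTEGER, asgtype INTEGER)""")

parsed = {"name": "Acme Corp", "asgseq": 1}   # other fields missing from the XML
fields = ["name", "nationality", "residence", "asgseq", "asgtype"]
conn.execute(
    "INSERT INTO ptoassignee VALUES (?, ?, ?, ?, ?)",
    tuple(parsed.get(f) for f in fields),     # missing keys -> None -> NULL
)
row = conn.execute("SELECT * FROM ptoassignee").fetchone()
print(row)
```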
====xml_parser2.plx====
Now to parse the data and load it into the RDP database. This parser, xml_parser2.plx, is located under McNair/usptoAssigneeData. Do not use the other parsers located in this directory - they are believed to be out-of-date.

Before parsing, look in the "patent" database for the most recent frameno and reelno so we don't copy over data that we already have in the database. Remove all files that have data we already loaded into the database before running the parser.
* current max reelno: 39068 (4/20/2017)
* current max frameno: 9680 (4/20/2017)

Running USPTO_Assignee_Splitter.pl should have loaded the files into a directory of your choosing. Pass the path to that directory to the parser and it will parse the data and load it into the database. To actually run the script:

 Open a command line or powershell
 e:
 cd .\McNair\usptoAssigneeData
 perl xml_parser2.plx name-of-directory-you-chose

Notes:
* The parser will open a connection to a database on the RDP's installation of postgres. It will then put the data directly into this database. Once complete, we manually move the tables to the dbase server's database (i.e. patent).
* The password hint is ''tsn''. You can run pgAdmin III to connect to it.

====Cleaning up the USPTO Assignee Data====
Existing documentation that seems relevant to cleaning/moving the USPTO Assignee data over from the RDP database:
* [[USPTO_Bulk_Data_Processing]]
* [[PTO_Tables]]
* [[USPTOAssigneesData]]
xml_parser2.plx creates four tables: Assignment, Assignees, Assignors, and Properties. These appear to correspond to ptoassignment, ptoassignee, ptoassignor, and ptoproperty, respectively, in the "patent" database. There is another pto table called "ptopatentfile" with the following schema, but I cannot determine how it is populated; xml_parser2.plx does not create it.
As of 4/20/2017 we think the correct parser is xml_parser2.plx in McNair/usptoAssigneeData.

              Table "public.ptopatentfile"
          Column          |         Type          | Modifiers | Storage  | Stats target | Description
  reel_no                 | integer               |           | plain    |              |
  frame_no                | integer               |           | plain    |              |
  action_key_code         | character varying(10) |           | extended |              |
  uspto_transaction_date  | date                  |           | plain    |              |
  uspto_date_produced     | date                  |           | plain    |              |
  version                 | numeric               |           | main     |              |
Look in the "patent" database and look for the most recent frameno and reelno so we don't copy over data that we already have in the database.
current max reelno: 39068 (4/20/2017)
current max frameno 9680 (4/20/2017)
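The reelno/frameno check described above can be sketched as follows. This is a Python illustration with dummy data, not the Perl parser; in practice the "already loaded" maximum would come from a <code>SELECT max(reelno), max(frameno)</code> against the "patent" database:

```python
# (reelno, frameno) pairs already loaded into the database (dummy values).
loaded = [(39067, 9679), (39068, 9680)]
max_loaded = max(loaded)   # tuples compare reel first, then frame

# Parsed assignment records from the newly downloaded files.
new_records = [(39068, 9680), (39069, 1), (39070, 2)]

# Keep only records strictly past the last loaded reel/frame position,
# so re-downloaded files don't duplicate rows already in the database.
to_load = [r for r in new_records if r > max_loaded]
print(to_load)
```

Note that the comparison is on the (reel, frame) pair, not on frameno alone: frame numbers restart on each reel, so a small frameno on a newer reel is still new data.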
===For the USPTO Maintenance Fee Data===
Download the file manually from https://bulkdata.uspto.gov/data2/patent/maintenancefee/ and replace the existing file in McNair/Patent Data/Maintenance Fee Data. Then go into the file and add headers for each column, separated by spaces (make sure that each header lines up with the start of its column). To see an example of how to do this, look at the example file MaintFeeEvents_20170410-wHeader.txt under McNair/Patent Data/Maintenance Fee Data.

Then run the normalizer on the text file:

 Open a command line or powershell
 e:
 cd ".\McNair\Patent Data\Maintenance Fee Data"
 perl NormalizerFixedWidth.pl -file=MaintFeeEvents_20170410-wHeaders.txt

where "MaintFeeEvents_20170410-wHeaders.txt" is the name of the file with the added headers at the top. The script puts the normalized (cleaned) output in a file of the same name with "-normal" appended (it basically appends "-normal" to whatever file name you pass it).

To then make a table out of the normalized text file, use the SQL detailed on [[Patent_Expiration_Rules]]. This will create entirely new tables from the maintenance fee data. To avoid repeating data, we will most likely just replace the existing tables in the database with the new tables.
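The reason the added headers must line up with the columns: a fixed-width normalizer can use the position where each header word starts to decide where to cut every data line. A Python sketch of that idea on a made-up two-line file (an assumption about how NormalizerFixedWidth.pl works, not its actual code):

```python
import re

# Hypothetical fixed-width file: header positions define column boundaries.
text = """PatentNum  AppNum     EventDate EventCode
4687580    06756943   19880809  M170
4687581    06838510   19880809  M170"""

lines = text.splitlines()

# Column start offsets = where each header word begins.
starts = [m.start() for m in re.finditer(r"\S+", lines[0])]
bounds = list(zip(starts, starts[1:] + [None]))  # last column runs to line end

def split_fixed(line):
    """Slice a line at the header-derived boundaries and strip padding."""
    return [line[a:b].strip() for a, b in bounds]

header = split_fixed(lines[0])
rows = [split_fixed(l) for l in lines[1:]]
print(header)
print(rows[0])
```

If a header is shifted off the start of its column, every slice after it is wrong, which is why the wHeader example file is worth copying carefully.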
===For the USPTO Historical Patent Data===
Has not been updated since 2015, but if you need to download Historical Patent data this is the link: https://bulkdata.uspto.gov/data2/patent/historical/2014/
== Specifications of USPTO Assignment Data To Extract in Future Perl Scripts==
Go to https://bulkdata.uspto.gov/ to download bulk data from USPTO.
* Some tables that will later be deleted were included on the spreadsheet because tables are currently being built to replace them
* May try to just move all the (twenty-something) "pto-" tables that have been created due to the "Restructuring Patent Data" project from "patent" to the new database
* Will work on understanding the SQL for filling the new database next week, from [[Patent_Data_Processing_-_SQL_Steps]] and [[Patent_Data_Cleanup_-_June_2016]]
'''4/4/2017''' - Found all the pages on extracting data and making tables and databases
* Do not need to pull Harvard Dataverse data again, it's saved in CSV files on the bulk drive
* Started looking through DTDs for USPTO patent and assignment data to determine if there is extra information that we should extract from USPTO data.
