Project Information
Has title INBIA
Has owner Anne Freeman
Has start date
Has deadline date
Has project status Active
Dependent(s): Incubator Seed Data
Has sponsor McNair Center
Has project output Data, Tool

Initial Review of INBIA

The International Business Innovation Association (INBIA) has a directory that contains information on 415 incubators within the United States. Each directory entry links reliably to a secondary page within the INBIA domain, which contains the incubator's name, address, a link to the incubator's website, and information for key contacts. Because the secondary pages share the same HTML structure and are consistent in the data they contain, INBIA is an ideal candidate for collecting data from the internal pages with a web crawler.

See Wiki Page Table for more details on source evaluations.

Retrieve URLs from INBIA Directory

We retrieved the INBIA data as follows:

  1. Go to the INBIA directory and search US
  2. Change to 100 results per page
  3. Save HTML page of 0-100
  4. Choose next page, Save HTML page of 100-200
  5. Sort Z-A
  6. Save HTML page 418-318
  7. Choose next page, Save HTML page of 318-218
  8. Note that we are missing some that start with L and M
  9. Search US L, Choose page with L as first letter, Save HTML of L
  10. Search US M, Choose page with M as first letter, Save HTML of M

Then process each of those HTML files with the following search-and-replace regular expressions in TextPad:

  • Search .*biobubblekey Replace #
  • Search ^[^#].*\n Replace NOTHING
  • Search .*href=\" Replace NOTHING
  • Search <\/a> Replace NOTHING
  • Search \"> Replace \t
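The same passes can be sketched in Python for reference; the function name and sample input below are illustrative, and the real input is the saved directory HTML:

```python
import re

def extract_entries(html):
    """Mimic the TextPad passes: keep only the lines containing
    'biobubblekey', then reduce each to 'url<TAB>name' (the columns
    are swapped into name/url order in a later step)."""
    entries = []
    for line in html.splitlines():
        if "biobubblekey" not in line:        # passes 1-2: drop all other lines
            continue
        line = re.sub(r'.*href="', "", line)  # strip everything up to href="
        line = re.sub(r"</a>", "", line)      # drop the closing anchor tag
        line = re.sub(r'">', "\t", line)      # tab-separate URL and name
        entries.append(line)
    return entries
```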

Then combine the files, throw out duplicates, move the columns (so the name comes first), and sort. This results in a headerless file with lines like:

1863 Ventures/Project 500	/?c=companyprofile&UserKey=4794e0a6-3f61-4357-a1cb-513baf00957e	
4th Sector Innovations	/?c=companyprofile&UserKey=cc47b04e-1c2a-4019-88b3-05d1163a0d6a	
712 Innovations	/?c=companyprofile&UserKey=531ad600-e11a-4c74-9f37-bace816b9325	
AccelerateHER	/?c=companyprofile&UserKey=3c05d1c1-91b5-48ae-8ec3-c77765b10c2b	
ACTION Innovation Network	/?c=companyprofile&UserKey=5ac08dd0-364d-47b2-8de0-a7536a3b4802	
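The combine/deduplicate/sort step can be sketched with pandas; the file paths are placeholders, and the extracts are assumed to still be in url-tab-name order:

```python
import pandas as pd

def combine_extracts(paths, out_path):
    """Combine the per-page extracts (url<TAB>name), drop duplicate rows,
    swap the columns so the name comes first, and sort by name."""
    frames = [pd.read_csv(p, sep="\t", header=None, names=["url", "name"])
              for p in paths]
    combined = (pd.concat(frames, ignore_index=True)
                  .drop_duplicates()        # throw out duplicates
                  [["name", "url"]]         # move columns: name first
                  .sort_values("name"))     # sort alphabetically
    combined.to_csv(out_path, sep="\t", header=False, index=False)
    return combined
```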

We can now build a crawler that requests the base INBIA URL followed by each URL extension (either percent-encoded or with &amp; replaced with just &); for example, one such URL gets the company page for Cambridge Innovation Center.

We can then rip out the contact information, including the URL, and the people, using either Beautiful Soup or regular expressions.

Retrieve Data from URLs Generated

We wrote a web crawler that

  1. reads the csv file containing the URLs to scrape into a pandas dataframe
  2. rewrites each URL by replacing ?c=companyprofile& with companyprofile? and prepending the INBIA domain
  3. opens each URL and extracts information using an element-tree parser
  4. writes the information collected from each URL to a txt file

The crawler generates a tab-separated text file called INBIA_data.txt containing [company_name, street_address, city, state, zipcode, country, website], populated with information from the 415 entries in the directory.
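The crawler's shape can be sketched as follows. The base URL and the CSS selector are placeholders, and Beautiful Soup stands in here for the element-tree parser the actual script used:

```python
import requests
import pandas as pd
from bs4 import BeautifulSoup

BASE = "https://example.org"  # placeholder for the real INBIA domain

def build_urls(csv_path):
    """Read the tab-separated name/extension file and rewrite each
    extension as the crawler does: '?c=companyprofile&' -> 'companyprofile?'."""
    df = pd.read_csv(csv_path, sep="\t", header=None, names=["name", "ext"])
    df["url"] = BASE + df["ext"].str.replace(
        "?c=companyprofile&", "companyprofile?", regex=False)
    return df

def scrape(df, out_path="INBIA_data.txt"):
    """Fetch each company page and write one tab-separated row per entry.
    The CSS class below is hypothetical; the real pages share one structure."""
    with open(out_path, "w") as out:
        for _, row in df.iterrows():
            soup = BeautifulSoup(requests.get(row["url"]).text, "html.parser")
            fields = [el.get_text(strip=True)
                      for el in soup.select(".profile-field")]  # hypothetical selector
            out.write("\t".join([row["name"]] + fields) + "\n")
```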

The txt file and the Python script are located in

E:\projects\Kauffman Incubator Project\01 Classify entrepreneurship ecosystem organizations\INBIA  

How to Run

The following script was coded in a virtualenv on a Mac, using Python 3.6.5. The following packages were loaded in that virtualenv:

  • beautifulsoup4 4.7.1
  • certifi 2019.3.9
  • chardet 3.0.4
  • idna 2.8
  • numpy 1.16.2
  • pandas 0.24.2
  • pip 19.1.1
  • python-dateutil 2.8.0
  • pytz 2018.9
  • requests 2.21.0
  • setuptools 40.8.0
  • six 1.12.0
  • soupsieve 1.9
  • urllib3 1.24.1
  • wheel 0.33.1