Parallel Enclosing Circle Algorithm


Project Information

  • Title: Parallel Enclosing Circle Algorithm
  • Owner: Oliver Chang
  • Start date: July 31, 2017
  • Deadline date:
  • Status: Complete
  • Depends on: Enclosing Circle Algorithm
  • Sponsor: McNair Center
  • Output: Tool

A thin wrapper around the enclosing circle algorithm which allows for instance-level parallelization. This project consists of the Python files in E:\McNair\Projects\OliverLovesCircles\src\python. There is another version of the project with plotting functionality that uses a slightly different approach (it removes duplicate points and uses their counts before running the algorithm) in E:\McNair\Projects\KyranLovesCircles\src\python.

Parallelization is implemented via a non-blocking facility in Python 2's standard library.

The Problem

Note that this is not the classical smallest enclosing circle problem. Rather, we seek to minimize the total area of a set of enclosing circles, each containing at least n points. Thus, multiple circles are allowed, and a point may be included in more than one circle.
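As a small illustration of the objective, the quantity being minimized can be sketched as follows (the `total_area` helper is hypothetical, not part of the project code):

```python
import math

def total_area(circles):
    # Objective value: the sum of the areas of the chosen circles.
    # `circles` is a list of (center_x, center_y, radius) tuples; the
    # constraint that each circle contains at least n points is enforced
    # elsewhere by the algorithm.
    return sum(math.pi * r * r for (_, _, r) in circles)

# Two circles of radius 1 and 2: total area = pi + 4*pi = 5*pi
print(total_area([(0.0, 0.0, 1.0), (5.0, 5.0, 2.0)]))
```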

This algorithm has terrible time-performance characteristics, so we assume that we can divide a large number of points with k-means and then solve the resulting subproblems independently. In other words, we make the simplifying assumption that the Enclosing Circle Algorithm has Optimal Substructure.
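The divide-and-conquer assumption can be sketched like this (a plain standard-library Lloyd's-algorithm k-means stands in for the project's constrained k-means, and `enclosing_circles` is a placeholder for the base solver):

```python
import random

def kmeans(points, k, iterations=20):
    # Plain Lloyd's algorithm over (x, y) tuples; returns non-empty clusters.
    centers = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: (p[0] - centers[i][0]) ** 2 +
                                        (p[1] - centers[i][1]) ** 2)
            clusters[nearest].append(p)
        centers = [(sum(p[0] for p in c) / len(c),
                    sum(p[1] for p in c) / len(c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return [c for c in clusters if c]

SPLIT_THRESHOLD = 1000  # assumed value; set by the configuration

def solve(points):
    # Optimal-substructure assumption: split big inputs with k-means and
    # solve each piece independently, concatenating the circles found.
    if len(points) <= SPLIT_THRESHOLD:
        return enclosing_circles(points)  # placeholder for the base algorithm
    pieces = kmeans(points, k=len(points) // SPLIT_THRESHOLD + 1)
    return [c for piece in pieces for c in solve(piece)]
```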


Configuration Variables

  • Algorithm settings:
    • PATH_SEPARATOR: the string that separates parts of the filename for both input and output files. For example, an input could look like "St. Louis#MO#2017#0.tsv" for PATH_SEPARATOR = '#'
    • ITERATIONS: the number of iterations to attempt for each k to find minimum for that k
    • MIN_POINTS_PER_CIRCLE (AKA n): the minimum number of data points that must be included in a circle
  • Driver settings:
    • NUMBER_INSTANCES: number of parallel instances to run; assume no data-races between instances
    • SWEEP_CYCLE_SECONDS: how long to wait between sweeps that remove completed jobs from the job pool and add new jobs if any files are left to process
    • TIMEOUT_MINUTES: maximum running time of a parallel instance of the algorithm
    • SPLIT_THRESHOLD: if a dataset has more than this threshold of data points, it will be split via k-means
    • OUTJOINER_INSTANCE_PATH: the path to
    • DATA_DIRECTORY: the input directory
    • OUTPUT_DIRECTORY: the directory to write outputs to
    • GENERATE_REPORTS: whether or not to run report generation on the output
    • REPORT_DIRECTORY: the directory to write reports to
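Collected in one place, a configuration module matching these variables might look like the following (variable names are from the list above; the values and file layout are illustrative assumptions):

```python
# Illustrative configuration values; the project's actual defaults may differ.

# Algorithm settings
PATH_SEPARATOR = '#'        # e.g. "St. Louis#MO#2017#0.tsv"
ITERATIONS = 20             # attempts per k to find the minimum for that k
MIN_POINTS_PER_CIRCLE = 10  # the "n" in the problem statement

# Driver settings
NUMBER_INSTANCES = 8        # parallel instances; no data races assumed
SWEEP_CYCLE_SECONDS = 60    # how often to reap completed jobs
TIMEOUT_MINUTES = 120       # maximum running time of one instance
SPLIT_THRESHOLD = 1000      # split datasets larger than this via k-means

# Paths
DATA_DIRECTORY = 'data/'
OUTPUT_DIRECTORY = 'out/'
GENERATE_REPORTS = True
REPORT_DIRECTORY = 'reports/'
```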

Structure and Usage

  • What the driver does
    • If given a "master file" through the argument --infile, splits it into constituent data files and stores them in DATA_DIRECTORY
    • Takes data files in DATA_DIRECTORY and runs the enclosing-circle solver in parallel on each of them, which writes its output files to OUTPUT_DIRECTORY
    • Takes output files in OUTPUT_DIRECTORY and runs the report generator, which writes its report files to REPORT_DIRECTORY
  • Command Line Arguments
    • --sweep-time overwrites SWEEP_CYCLE_SECONDS
    • --instances overwrites NUMBER_INSTANCES
    • --min_points overwrites MIN_POINTS_PER_CIRCLE
    • --infile: Path to large master file, e.g. CirclesTestData.txt
    • --split-out overwrites DATA_DIRECTORY
    • --out overwrites OUTPUT_DIRECTORY
    • --report overwrites REPORT_DIRECTORY
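An argparse setup mirroring the flags above might look like this (a hypothetical sketch; the real driver's parser may differ in defaults and help text):

```python
import argparse

def build_parser():
    # One flag per configuration override listed above.
    p = argparse.ArgumentParser(description='Parallel enclosing-circle driver')
    p.add_argument('--sweep-time', type=int, help='overrides SWEEP_CYCLE_SECONDS')
    p.add_argument('--instances', type=int, help='overrides NUMBER_INSTANCES')
    p.add_argument('--min_points', type=int, help='overrides MIN_POINTS_PER_CIRCLE')
    p.add_argument('--infile', help='path to a large master file')
    p.add_argument('--split-out', help='overrides DATA_DIRECTORY')
    p.add_argument('--out', help='overrides OUTPUT_DIRECTORY')
    p.add_argument('--report', help='overrides REPORT_DIRECTORY')
    return p

args = build_parser().parse_args(['--instances', '4', '--infile', 'CirclesTestData.txt'])
```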

  • What a solver instance does
    • Called with two command-line arguments: the input path and the output path
    • Calculates points and circles for the input and writes them to the output

  • What the report generator does
    • Given an output directory, generates three files: circles.tsv, points.tsv, and summary.tsv, and stores them in a given reports directory
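A sketch of the summary step (hypothetical: the real report generator's columns and logic are not documented here; this version just counts rows per output file):

```python
import os

def write_summary(output_dir, report_dir):
    # Hypothetical summary.tsv: one row per output file with its row count.
    with open(os.path.join(report_dir, 'summary.tsv'), 'w') as out:
        out.write('file\trows\n')
        for name in sorted(os.listdir(output_dir)):
            if not name.endswith('.tsv'):
                continue
            with open(os.path.join(output_dir, name)) as f:
                n_rows = sum(1 for _ in f)
            out.write('%s\t%d\n' % (name, n_rows))
```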


DATA_DIRECTORY

  • The filenames in this directory have the format {city}{sep}{state}{sep}{year}{sep}{num}.tsv, where num is a 0-indexed integer identifying a split of a city/state/year infile with more than SPLIT_THRESHOLD points.
  • These files are created when the driver splits up a master file.
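Filenames in this format can be taken apart as follows (the `parse_data_filename` helper is hypothetical):

```python
PATH_SEPARATOR = '#'  # assumed; set by the configuration

def parse_data_filename(filename):
    # {city}{sep}{state}{sep}{year}{sep}{num}.tsv -> (city, state, year, num)
    stem = filename[:-len('.tsv')]
    city, state, year, num = stem.split(PATH_SEPARATOR)
    return city, state, int(year), int(num)

print(parse_data_filename('St. Louis#MO#2017#0.tsv'))
# -> ('St. Louis', 'MO', 2017, 0)
```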


OUTPUT_DIRECTORY

  • The filenames in this directory have the format {city}{sep}{state}{sep}{year}{sep}{num}.tsv, where num is a 0-indexed integer identifying a split of a city/state/year infile with more than SPLIT_THRESHOLD points.
  • These files are created when a solver instance processes a file from DATA_DIRECTORY.


REPORT_DIRECTORY

  • There are three files in this directory: circles.tsv, points.tsv, and summary.tsv.

Example Usage

Splitting a master file and running

$ python --infile E:/McNair/Projects/OliverLovesCircles/CoLevelForCirclesNotRunGTE200.txt

where CoLevelForCirclesNotRunGTE200.txt is a tab-separated values file with the columns placestate, place, statecode, year, latitude, longitude, coname, datefirstinv, placens, geoid, city

This command will populate (and overwrite) any files in data/, out/, and reports/.
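The split step can be sketched as follows (hypothetical code, assuming tab-separated input with the columns above; chunking groups larger than SPLIT_THRESHOLD into numbered pieces is omitted, so every file gets num = 0):

```python
import os
from collections import defaultdict

PATH_SEPARATOR = '#'  # assumed; set by the configuration

def split_master(infile, data_directory):
    # Group master-file rows by (city, statecode, year) and write one
    # {city}#{state}#{year}#0.tsv per group.  The header row is not
    # rewritten to the split files in this sketch.
    groups = defaultdict(list)
    with open(infile) as f:
        header = f.readline().rstrip('\n').split('\t')
        city_i = header.index('city')
        state_i = header.index('statecode')
        year_i = header.index('year')
        for line in f:
            if not line.strip():
                continue
            fields = line.rstrip('\n').split('\t')
            groups[(fields[city_i], fields[state_i], fields[year_i])].append(line)
    for (city, state, year), lines in groups.items():
        name = PATH_SEPARATOR.join([city, state, year, '0']) + '.tsv'
        with open(os.path.join(data_directory, name), 'w') as out:
            out.writelines(lines)
```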

Running on already split files

$ python

This command will populate (and overwrite) any files in out/ and reports/.


  1. "St. Paul" and "St. Louis" have un-enclosed points--speculate because of weird file path issues
  2. Some place/state/year combinations do not run to completion regardless of how tractable the number of points
  3. How to merge small enclosing circles? This is a better measure of agglomeration regardless
  4. How to separate outliers?
  5. Sometimes circles with 0 radius are created
  6. enclosingcirclealg() returns None sometimes

Makeshift way to plot circles

  1. Connect to the database with the command psql -U postgres arc
  2. The password is tabspaceenter, I think
  3. \d lists the tables
  4. Run the SQL script LoadCircles.sql in OliverLovesCircles
  5. Open ArcMap
  6. Add data -> Top of file tree -> Database connection -> localhost for instance, database arc -> connect to localhost and the table testcirclegeom
  7. Add points from local files; make sure they are .txt or .tab files, not .tsv, or they won't be found
  8. Points -> Properties -> Source -> Set data source -> x field: long, y field: lat

St. Louis bug

[Image: St louis bug.png]

This image shows a rendering of the results of running the algorithm on St. Louis. There are four circles (circle centers are shown as green dots), but two have radii of 0.0.

Progress on the bug

  1. Removing duplicate points from the data removes all of the errors, but this no longer gives the solution with the smallest area.
  2. I tried removing duplicates while keeping track of a "count" for each point.
  3. I narrowed down the bug to the constrained_kmeans method in (paper here)
    1. For some reason, this returns clusters with fewer points than n
    2. This is a good overview of the algorithm
  4. I wrote a plotter, the plot method in
